
Plain Sailing on a Sea of Data

Running a business can be like sailing a ship. It’s not enough to set a course and expect the ship to just get there. The sea can be rough, the weather unpredictable, and just under the surface there may be hidden obstacles. Any number of problems can push you off course or, at worst, sink your ship completely. A sea of data washes around the business, and it can either drag you under or carry you swiftly to your destination. So what can you do to ensure a safe arrival? The following checklist will help you get ready for your journey.

A strong ship

You will need a ship that can sail in all weathers and has a reliable track record of sailing long distances.

Subex has a pedigree of providing enterprise-strength software to the telecoms industry for over 20 years. Subex ROC can crunch billions of call records daily.

A clear destination

Without an agreed destination your business will drift aimlessly and will eventually run aground. By setting clear objectives, everyone can work together to ensure the ship stays on course.

Subex consultants can help you plan your voyage and set a course for your planned destination. Subex Analytics helps to keep a business on course by using historical data to predict the direction in which the business is heading.

A good map

Your map is your business strategy, but this shouldn’t be set in stone. A business strategy must adapt to changing conditions, and business leaders must remain vigilant, making course corrections when necessary.

Subex ROC provides a modularised framework that can be quickly adapted to handle new challenges or changes in course. The course to follow can be mapped out in KPIs that provide constant visual feedback, alerting you whenever your business goes off course.

A navigator who knows the way

Pick the right currents and your ship will be carried quickly to your destination, but getting caught by the wrong ones can set you miles off course. A business embarking on a journey into the unknown needs an experienced navigator to guide the way.

Subex consultants have the experience to guide a corporation through the open waters and narrow straits of competition, regulatory requirements and market demands.

As soon as sailors began to navigate beyond the sight of land, they realised they needed more than just the stars to find their way. Compasses, nautical maps, sextants and marine chronometers all advanced the quality of analytics available to navigators, but until radar, LORAN and GPS arrived in the late 20th century, sailors still relied largely upon dead reckoning and gut instinct to find the way, and their journeys would often end tragically. Now Subex analytics can provide the same kind of objective view of where a business is heading that GPS provides for sailors. Subex analytics can see through the fog and the turbulent waves of data that flood into an organisation and threaten to knock you off course. Using advanced time series forecasting, correlation, what-if modelling and pin-sharp visualisations, Subex ROC can navigate through a sea of data and help to steer your business safely to its planned destination.
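
To make the forecasting idea concrete, here is a minimal sketch of projecting a business KPI forward with Holt’s double exponential smoothing – a deliberately simplified stand-in for the time series methods mentioned above, not Subex ROC’s actual implementation. The KPI values and smoothing factors are invented for illustration.

```python
# Minimal Holt (double exponential smoothing) forecast of a monthly KPI.
# All figures are illustrative, not real business data.
kpi = [4.1, 4.3, 4.2, 4.6, 4.8, 4.7, 5.0, 5.2, 5.1, 5.4, 5.6, 5.5]
alpha, beta = 0.5, 0.3            # smoothing factors (illustrative choices)

level, trend = kpi[0], kpi[1] - kpi[0]
for value in kpi[1:]:
    prev_level = level
    level = alpha * value + (1 - alpha) * (level + trend)   # update level
    trend = beta * (level - prev_level) + (1 - beta) * trend  # update trend

# Project the next six months to see which direction the business is heading.
forecast = [level + (h + 1) * trend for h in range(6)]
print("6-month forecast:", [round(f, 2) for f in forecast])
```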

Traditional Capacity Management is doomed to fail and cost CSPs millions in unnecessary CapEx!

Most CSPs today adopt a traditional capacity management approach that consists of planning their network resource requirements over the next 12 months based on past consumer trends.

Reality check! With today’s fast-paced end-user consumption and service demand, trying to predict resource needs 12 months out based on past end-user behavior is like playing a lottery based on past outcomes in the hope of hitting it big – more often than not, you will lose big time.

The reality is that the past doesn’t predict the future anymore, and that end-users are causing unpredictable shifts in resource consumption in the network as they tune into major events and build their lives around real-time communications. Oh sure, many CSPs will read this and think: we have the latest probes in the network giving us loads of real-time data and complex flows, and we know whether packets are traveling left or right in the network. And yet, with all this information, CSPs still can’t keep ahead of today’s data tsunami without being simultaneously choked by escalating CapEx.

Having loads of low-level information more often than not causes data overload: you have so much raw data that you don’t know what it means from an overall congestion perspective without weeks or months of analysis. Or even worse, you may interpret trends differently depending on the data sample you examine, making it virtually impossible to project congestion and business impacts. Many CSPs with whom I have spoken face the same problem: when in doubt, pour more CapEx into the network, in the hope of adding the right resources to alleviate congestion.

What if there were a way to more precisely target the CapEx spent in the network to deliver the services CSPs need to thrive? In fact, there is, and it’s called “Real-time Capacity Analytics”!

Real-time Capacity Analytics is about understanding all capacity-related data together rather than looking at it from a per-attribute or per-device perspective – which provides little more clarity than a blip in an ocean of traffic – and instead looking at capacity consumption as it relates to end-to-end paths and the services delivered to end-users. It is amazing that CSPs are so concerned about how capacity congestion affects their subscribers, and yet most solutions today fail to look at capacity from an end-to-end, end-user perspective. Without an end-to-end view and an understanding of how different segments of the network path and services affect congestion, CSPs may be spending CapEx in portions of the network where it only temporarily relieves the symptoms of congestion rather than resolving the root cause.
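
To make the end-to-end idea concrete, here is a minimal, hypothetical sketch: per-device counters only say how busy each element is, while the capacity an end-user actually experiences is bounded by the busiest segment along the whole service path. The segment names and figures below are invented for illustration and do not reflect any real Subex data model.

```python
# Illustrative per-segment load and capacity (Gbps) along one service path.
segments = {
    "access-olt-7":   (7.2, 10.0),     # 72% utilized
    "metro-ring-2":   (61.0, 100.0),   # 61% utilized
    "core-lsp-14":    (388.0, 400.0),  # 97% utilized <- the real bottleneck
    "peering-edge-1": (22.0, 40.0),    # 55% utilized
}
service_path = ["access-olt-7", "metro-ring-2", "core-lsp-14", "peering-edge-1"]

utilization = {name: load / cap for name, (load, cap) in segments.items()}

# The capacity an end-user actually experiences is bounded by the busiest
# segment on the path; spending CapEx anywhere else treats symptoms only.
bottleneck = max(service_path, key=lambda name: utilization[name])
print(f"end-to-end bottleneck: {bottleneck} at {utilization[bottleneck]:.0%}")
```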

So as a CSP, the next time you face customer impact from congestion, ask yourself: Did I see it coming? Did I get the right warning signs that congestion was building up over time? Did I get a read on time to exhaustion that could have helped me plan added capacity before impacting my customers? Is my solution pinpointing where to target my CapEx? And finally, do I have a solution that can tell me whether my network can accommodate new subscribers or services and, if not, where the congestion hot-spots will occur and how much CapEx is needed?
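
As one simple illustration of a “read on time to exhaustion”, the sketch below fits a linear trend to weekly utilization samples and projects when that trend crosses the capacity ceiling. The linear-growth assumption and all figures are purely illustrative; a production solution would use far richer forecasting.

```python
import numpy as np

capacity_gbps = 100.0  # illustrative link capacity

# Ten weeks of illustrative peak-load samples (Gbps), trending upward.
weeks = np.arange(10)
peak_load = np.array([62, 64, 63, 67, 70, 69, 73, 76, 75, 79], dtype=float)

# Fit a linear trend and project when it crosses the capacity ceiling.
slope, intercept = np.polyfit(weeks, peak_load, 1)
if slope <= 0:
    print("no upward trend: no projected exhaustion")
else:
    exhaustion_week = (capacity_gbps - intercept) / slope
    print(f"growth ~{slope:.1f} Gbps/week; capacity exhausted around "
          f"week {exhaustion_week:.0f} "
          f"({exhaustion_week - weeks[-1]:.0f} weeks away)")
```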

If your current solution isn’t helping you answer any of the above questions, it is time to consider Real-Time Analytics for Capacity Management before your business gets swept away by the tides of capacity congestion.

Could Analytics Pose an Existential Threat to Large Tier Operators?

As we dive deeper into the world of analytics, more and more information and intelligence is being made available to operators, analysts, and other interested parties. But along this same progression of “seeming” advancement in the domain, there is also a growing, critical threat to large tier operators: Access.

Access itself is a big word. In the context above it simply means “access to analytics intelligence”. But the other meanings are where the problem lies for larger tier operators. When you think of “access” in terms of these areas, the problem becomes clearer:

1. Access to where your data is located – where is it? All of it?

2. Access to the group(s) who control that data – can you get to those people? Do you even know who they are?

3. Access to the data itself – assuming you know where it is and who owns it, can you even get to it?

As history proves again and again, these questions become increasingly harder to answer as the operator gets bigger. Comparing a larger operator with a smaller one, here’s what is all too often the case:

Where is the data? In smaller operators, data tends to be located in singular systems. Put more simply, they don’t have 4 billing systems for retail customers, 3 inventory platforms, and 6 order entry instances; they tend to be closer to one of each type. Obviously, the younger the operator, the more advantages they have as well, simply because the infrastructure and architecture are simpler. Larger tier operators, however, are often quite the opposite. In addition to having multiple systems that are redundant or duplicated, they often have customers scattered across these instances with no particular rhyme or reason, as successive system consolidation events have spread any given customer’s account in unusual directions. One of the biggest challenges in this regard is getting to a true “single view” of an account. Some of the largest carriers in the world have tried to get there – they remain unsuccessful, and ironically, they often don’t even realize it themselves.

Who controls the data? So let’s say you know where your data is. Can you get to it? Do you need a budget available to *pay* the internal group to get you your own data? But most critically: when you show the data owners what you need and why you need it, can you successfully escape their attempt to create the analytics for you? Many of these shops that own data also have lighter-weight analytics capabilities. They would love for you to engage them to build something for you. They would love for you to pay their department for their efforts. They would love for you to not use an outside expert, as this is often *perceived* as a direct threat to them (it is NOT, but they still respond as if it were). They would love for you to educate them on what you want and how to do it. But most importantly, they’ll sometimes make getting your data so difficult and expensive that it becomes more financially attractive to just do it their way. This never gets you the result you expect, on time, or anywhere close to budget.

Contrast this with a smaller operator. The smaller the operator, the more collaboration takes place. That’s not a judgement…it’s just the unfortunate truth (for the large tiers). Across all the operators I’ve worked with directly on project deliveries over the years, this has held true every time. When a finance guy needs a data dump from the accounting platform, he doesn’t make a phone call to a different region, through a chain of 3-10 people, to get to the right person. He walks down the hall, around the corner, into the IT guy’s office, sits down for a few minutes of “small talk”, asks for the data at the end of those few minutes, and usually has it by the next day. Also, smaller operators don’t believe they can do it all. They already wear 10 hats during the day, and they welcome an expert team in analytics to help drive immediate value back into their business, almost from the day it walks in the door.

The simple fact of the matter is this: analytics is going to be a crucial differentiator in operators’ speed (agility) of response to business and market changes. And when you cut to the chase, data access will make this practice virtually impossible for large tier, legacy operators to fully leverage compared to smaller organizations. This will cripple many large operators in some key, lucrative segments of their markets. How can we be so sure? It’s already happening…

A Thorn in the Side? – Handling Over-the-top Content Demands

As I interact with more and more service providers about their network capacity issues, I’ve become sure about one thing – what worked before isn’t really working anymore. The CapEx poured into network equipment just to keep up with the exponential growth in data traffic (i.e., the Data Tsunami) is still not getting them ahead of significant congestion issues and customer-impacting events. Why? Traditional capacity management paradigms are not working.

Essentially, feedback from carriers of all sizes and types has exposed one of the most significant shifts in thinking about how to manage and plan for network capacity. They know that the rules are all changing and that today’s content demands are outpacing CSPs’ ability to keep up. The first key question is how to get back in front of the capacity demand (we’ll talk about monetization next…stay tuned). So, why aren’t today’s processes scaling?

  • CSPs use a multitude of human resources and manual processes to manage network capacity. This may have scaled under slower and more predictable capacity growth curves, but thanks to services like YouTube and Netflix, entire network capacity is shifting in quantum leaps.
  • Solutions provided by equipment vendors are often platform specific, and reinforce a silo approach to Capacity Management when a holistic view is needed.  Service demand congestion is a network phenomenon which doesn’t care about individual equipment vendors or devices.
  • CSP planning groups leverage data and make decisions based on systems that are 20-40% inaccurate in comparison to the actual capacity available in the network.
  • Today’s CSP solution approach is often homegrown, where 90% of the time is spent on acquiring and understanding raw data. As a whole, everyone is trying to answer the question of how to proactively eliminate the possibility of congestion, but most are still focused on addressing the symptoms rather than preventing the problem.

It is surprising to note that even top tier operators and technology leaders cannot accurately predict where and when capacity issues will impact their networks. This lack of visibility hurts CSPs considerably because, as per our own studies, network events can account for up to 50% of customer churn in high-value mobile data services.

And the Capacity Management problem doesn’t really end there; in many ways it’s like a supply chain process. Marketing owns the function of forecasting where service uptake will drive capacity needs across the network. When Marketing underestimates service uptake, there is a real and significant impact on potential revenue: on average, it can take about 3 months from when capacity is fully tapped in a Central Office (CO) to when new capacity can be added to your network. During that time, customers expecting service availability become hugely frustrated and begin to churn. Engineering groups are pushed into panic mode, trying to react as fast as possible – often putting capacity in the wrong places due to inaccurate data – resulting in further congestion, service degradation, and an inefficient use of capital.

The message from CXOs is crystal clear – there is an urgent and dire need to find new ways of monetizing the data crossing their networks. This need is exacerbated by OTT content and net neutrality. SLA- and authentication-based revenue models are absolutely dependent on knowing what types of content and services are traversing your network, how much capacity they consume, and how utilization is driven by your consumers’ interests and activities. This type of analysis requires a critical and intelligent binding of network and services data with business data to truly assess the financial impact to the CSP (a simple sketch of such a binding follows the list below). Many Business Intelligence (BI) solution leaders will lay claim to abilities here, but actually fall very short of the mark. Instead, real experience suggests that solutions in the marketplace today either:

  • Can handle the financial aspects of your business but have no understanding of today’s network dynamics in terms of capacity issues and services;
  • Can handle parts of your network very deeply, but do not correlate or provide a holistic view at the service level; or,
  • Can collect some network and service level information, but have no ability to incorporate business data to understand the impact on the business – i.e., cost, subscriber behavior, propensities.
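
As a rough illustration of the network-to-business binding described above, the sketch below joins hypothetical per-service traffic volumes with per-service revenue to estimate what each service earns per unit of capacity it consumes. The service names, figures, and the simple revenue-per-TB measure are all assumptions for illustration only.

```python
# Hypothetical network-side view: monthly traffic per service (TB).
traffic_tb = {"video-ott": 820.0, "gaming": 140.0, "voip": 35.0, "web": 260.0}

# Hypothetical business-side view: monthly revenue per service ($ thousands).
revenue_k = {"video-ott": 310.0, "gaming": 95.0, "voip": 120.0, "web": 180.0}

# Bind the two views: which services pay for the capacity they consume?
for service, tb in sorted(traffic_tb.items(), key=lambda kv: -kv[1]):
    dollars_per_tb = revenue_k[service] * 1000 / tb
    print(f"{service:>9}: {tb:7.1f} TB/month, ${dollars_per_tb:8,.2f} per TB")
```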

All the above challenges bring us to the inevitable question – what kind of approach does one take to tackle capacity management issues? How does one stop chasing traffic and focus on flattening the CapEx curve instead? In order to attain ‘Capacity Management Nirvana’, CSPs need to adopt a proactive and scalable approach: one which not only intelligently binds network and business strategies to the realities of the Data Tsunami, but also brings proactive and predictive capacity management to the table. At the end of the day, a CSP should have access to all their capacity, the ability to leverage real and immediate feedback on the change in capacity as service uptake increases, and finally, the right tools and intelligence to get in front of what’s coming.

To learn more about how a Capacity Management solution can help you address the above issues, download the whitepaper “Energizing Smart Growth with Network Intelligence”.
