
As we dive deeper into the world of analytics, more and more information and intelligence is being made available to operators, analysts, and other interested parties. But alongside this apparent advancement in the domain, there is also a growing, critical threat to large-tier operators: Access.

Access itself is a big word. In the context above it simply means “access to analytics intelligence”. But the other meanings are where the problem lies for larger-tier operators. When you think of “access” in terms of these areas, the problem becomes clearer:

1. Access to where your data is located – where is it? All of it?

2. Access to the group(s) who control that data – can you get to those people? Do you even know who they are?

3. Access to the data itself – assuming you know where it is and who owns it, can you even get to it?

As history shows again and again, these questions become harder and harder to answer as the operator grows larger. Comparing a larger operator with a smaller one, here’s what is all too often the case:

Where is the data? In smaller operators, data tends to live in singular systems. Put more simply, they don’t have 4 billing systems for retail customers, 3 inventory platforms, and 6 order entry instances. They tend to be closer to one of each type. The younger the operator, the more advantages they have as well, simply because infrastructure and architecture are simpler. Larger-tier operators are often quite the opposite. In addition to having multiple systems that are redundant or duplicated, they often have customers scattered across these instances with no particular rhyme or reason, as successive system consolidation events have spread any given customer’s account in unusual directions. One of the biggest challenges in this regard is getting to a true “single view” of an account. Some of the largest carriers in the world have tried to get there – they remain unsuccessful, and ironically, they often don’t even realize it themselves.
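To make the “single view” problem concrete, here is a minimal sketch of stitching one customer’s fragments back together, assuming each system can at least export records carrying a shared customer identifier. In practice, that shared key is exactly what consolidation events tend to destroy. The system names, field layouts, and figures below are all hypothetical.

```python
# Minimal sketch: assembling a "single view" of a customer whose account
# fragments live in several billing/ordering systems. System names, field
# layouts, and all figures are hypothetical.
from collections import defaultdict

# Extracts from three hypothetical systems; in practice each arrives in a
# different schema, and a shared key like cust_id is not guaranteed.
billing_a = [{"cust_id": "C100", "mrc": 49.99, "source": "billing_a"}]
billing_b = [{"account_no": "A-7731", "cust_id": "C100", "mrc": 12.00,
              "source": "billing_b"}]
orders    = [{"cust_id": "C100", "open_orders": 2, "source": "orders"}]

def single_view(*extracts):
    """Fold per-system records into one profile per customer id."""
    view = defaultdict(lambda: {"sources": [], "total_mrc": 0.0,
                                "open_orders": 0})
    for extract in extracts:
        for rec in extract:
            profile = view[rec["cust_id"]]
            profile["sources"].append(rec["source"])
            profile["total_mrc"] += rec.get("mrc", 0.0)
            profile["open_orders"] += rec.get("open_orders", 0)
    return dict(view)

print(single_view(billing_a, billing_b, orders))
# -> one profile for C100: sources from all three systems,
#    total_mrc 61.99, open_orders 2
```

The hard part in a real carrier is not the fold itself but the match key: when the same customer is "C100" in one system and "A-7731" in another with no cross-reference, no amount of merging logic recovers the single view.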

Who controls the data? So let’s say you know where your data is. Can you get to it? Do you need a budget available to *pay* the internal group to get you your own data? But most critically: when you show the data owners what you need and why you need it, can you successfully escape their attempt to create the analytics for you? Many of the shops that own data also have lighter-weight analytics capabilities. They would love for you to engage them to build something for you. They would love for you to pay their department for their efforts. They would love for you to not use an outside expert, as this is often *perceived* as a direct threat to them (it is NOT, but they still respond as if it were). They would love for you to educate them on what you want and how to do it. But most importantly, they’ll sometimes make getting your data so difficult and expensive that it becomes more financially attractive to just do it their way. This never gets you the result you expect, on time, or anywhere close to budget.

Contrast this with a smaller operator. The smaller the operator, the more collaboration takes place. That’s not a judgement…it’s just the unfortunate truth (for the large tiers). Across every operator I’ve worked with directly on project deliveries over the years, this has held true each and every time. When a finance guy needs a data dump from the accounting platform, he doesn’t make a phone call to a different region, through a chain of 3-10 people, to reach the right person. He walks down the hall, around the corner, into the IT guy’s office, sits down for a few minutes of “small talk”, asks for the data, and usually has it by the next day. Also – smaller operators don’t believe they can do it all. They already wear 10 hats during the day, and they welcome an expert analytics team to help drive immediate value back into their business, almost from the day they walk in the door.

The simple fact of the matter is this: analytics is going to be a crucial differentiator in how quickly (how agilely) operators respond to business and market changes. And when you cut to the chase, data access will make analytics virtually impossible for large-tier, legacy operators to fully leverage compared with smaller organizations. This will cripple many large operators in some key, lucrative segments of their markets. How can we be so sure? It’s already happening…


As I interact with more and more service providers about their network capacity issues, I’ve become sure about one thing – what worked before isn’t really working anymore. The CapEx spent on network equipment just to keep up with the exponential growth in data traffic (i.e., the Data Tsunami) is still not getting them ahead of significant congestion issues and customer-impacting events. Why? Traditional capacity management paradigms are not working.

Essentially, feedback from carriers of all sizes and types has exposed one of the most significant shifts in thinking about how to manage and plan for network capacity. They know the rules are all changing and that today’s content demands are outpacing CSPs’ ability to keep up. The first key question is how to get back in front of the capacity demand (we’ll talk about monetization next…stay tuned). So, why aren’t today’s processes scaling?

  • CSPs use a multitude of human resources and manual processes to manage network capacity. This may have scaled under slower and more predictable capacity growth curves, but thanks to services like YouTube and Netflix, network-wide capacity demand is shifting in quantum leaps.
  • Solutions provided by equipment vendors are often platform specific, and reinforce a silo approach to Capacity Management when a holistic view is needed.  Service demand congestion is a network phenomenon which doesn’t care about individual equipment vendors or devices.
  • CSP planning groups leverage data and make decisions based on systems that are 20-40% inaccurate relative to the actual capacity available in the network.
  • Today’s CSP solutions are often homegrown, with 90% of the time spent on acquiring and understanding raw data. As a whole, everyone is trying to figure out how to proactively eliminate the possibility of congestion, but most are still addressing the symptoms rather than preventing the problem (see the sketch after this list).

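As an illustration of the proactive stance these bullets argue for, here is a minimal sketch that sweeps element utilization trends and flags anything on a path to congestion, rather than waiting for trouble tickets. The element names, utilization figures, growth rates, and the 85% threshold are all illustrative assumptions, not data from any real network.

```python
# Minimal sketch of replacing a manual capacity review with an automated
# sweep: flag any element whose utilization trend will cross a congestion
# threshold. All names and numbers below are illustrative.

THRESHOLD = 0.85          # utilization level treated as congestion risk
elements = {
    # element: (current utilization, observed weekly growth in points)
    "CO-DAL-01": (0.78, 0.02),
    "CO-SEA-04": (0.60, 0.005),
    "CO-NYC-09": (0.83, 0.01),
}

def weeks_to_congestion(util, growth):
    """Weeks until the trend line crosses THRESHOLD (None if flat/shrinking)."""
    if growth <= 0:
        return None
    return max(0.0, (THRESHOLD - util) / growth)

for name, (util, growth) in sorted(elements.items()):
    eta = weeks_to_congestion(util, growth)
    flag = f"congests in ~{eta:.0f} weeks" if eta is not None else "stable"
    print(f"{name}: {util:.0%} utilized, {flag}")
```

The point is the shape of the process, not the arithmetic: the sweep runs continuously over every element, so the planning conversation starts from “what congests next quarter” instead of “what broke last night”.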
It is surprising to note that even top-tier technology leaders cannot accurately predict where and when capacity issues will impact their networks. This lack of visibility hurts CSPs considerably because, per our own studies, network events can account for up to 50% of customer churn in high-value mobile data services.

Flattening the CapEx curve

And the Capacity Management problem doesn’t really end there; in many ways it’s like a supply chain process. Marketing owns the function of forecasting where service uptake will drive capacity needs across the network. When Marketing underestimates service uptake, there is a real and significant impact to potential revenue: on average, it can take about 3 months from when capacity is fully tapped in a Central Office (CO) to when new capacity can be added to your network. During that time, customers expecting service availability become hugely frustrated and begin to churn. Engineering groups are pushed into panic mode, trying to react as fast as possible – often putting capacity in the wrong places due to inaccurate data – resulting in further congestion, service degradation, and an inefficient use of capital.
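The 3-month lead time is what turns this into a supply chain problem: the decision to augment a CO has to be made against projected demand at delivery time, not today’s utilization. Here is a minimal sketch of that trigger logic; the capacity figure, current demand, and growth rate are invented for illustration, and only the roughly 3-month lead time comes from the text above.

```python
# Minimal sketch of the lead-time problem described above: if new capacity
# takes ~3 months to land, the augment decision must be based on projected
# demand at delivery time, not on today's utilization. All numbers other
# than the lead time are illustrative.

LEAD_TIME_MONTHS = 3      # CO augment lead time cited in the text
CAPACITY = 10_000         # sellable units in a hypothetical CO

def should_order(current_demand, monthly_growth_rate):
    """Order now if demand will exceed capacity before new capacity lands."""
    projected = current_demand * (1 + monthly_growth_rate) ** LEAD_TIME_MONTHS
    return projected >= CAPACITY, projected

ordered, projected = should_order(current_demand=8_200, monthly_growth_rate=0.07)
print(f"projected demand at delivery: {projected:,.0f} -> "
      f"{'order now' if ordered else 'wait'}")
# 8,200 units growing 7%/month reaches ~10,045 in 3 months: already past
# capacity, so waiting for the CO to actually fill up would mean roughly
# 3 months of turned-away demand and churn.
```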

The message from CXOs is crystal clear – there is an urgent and dire need to find new ways of monetizing the data crossing their networks. This need is exacerbated by OTT content and net neutrality. SLA- and authentication-based revenue models are absolutely dependent on knowing what types of content/services are traversing your network, how much capacity they consume, and how utilization is driven by your consumers’ interests and activities. This type of analysis requires a critical and intelligent binding of network and services data with business data to truly assess the financial impact to the CSP (a minimal illustration follows the list below). Many Business Intelligence (BI) solution leaders will lay claim to abilities here, but actually fall very short of the mark. Instead, real experience suggests that solutions in the marketplace today either:

  • Can handle the financial aspects of your business but have no understanding of today’s network dynamics in terms of capacity issues and services;
  • Can handle parts of your network very deeply, but do not correlate or provide a holistic view at the service level; or,
  • Can collect some network and service level information, but have no ability to incorporate business data to understand the impact to the business – i.e., cost, subscriber behavior, propensities.
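As a minimal illustration of what “binding network data with business data” means in practice, the sketch below joins per-element congestion status (network side) against subscriber revenue (business side) to put a dollar figure on revenue at risk. The tables are toy placeholders; the 50% churn assumption echoes the churn figure cited earlier, but every other number is invented.

```python
# Minimal sketch of binding network data to business data: join per-element
# congestion status against subscriber revenue to estimate revenue at risk.
# Tables, rates, and element names are illustrative placeholders.

congested = {"CO-NYC-09", "CO-DAL-01"}          # from network telemetry
subscribers = [
    # (subscriber id, serving element, monthly revenue)
    ("S1", "CO-NYC-09", 80.0),
    ("S2", "CO-SEA-04", 55.0),
    ("S3", "CO-DAL-01", 120.0),
]
CHURN_IF_CONGESTED = 0.5   # the up-to-50% churn figure cited above

revenue_at_risk = sum(
    rev * CHURN_IF_CONGESTED
    for _, element, rev in subscribers
    if element in congested
)
print(f"monthly revenue at risk: ${revenue_at_risk:,.2f}")  # $100.00
```

Neither half of this join is interesting on its own; it is the correlation of the two that tells the business which congestion events are actually worth capital.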

All the above challenges bring us to the inevitable question: what kind of approach does one take to tackle capacity management issues? How does one stop chasing traffic and focus on flattening the CapEx curve instead? To attain ‘Capacity Management Nirvana’, CSPs need to adopt a proactive and scalable approach, one which not only intelligently binds network and business strategies based on the Data Tsunami realities but also brings proactive and predictive capacity management to the table. At the end of the day, a CSP should have access to all of their capacity, the ability to leverage real and immediate feedback on the change in capacity as service uptake increases, and finally, the right tools and intelligence to get in front of what’s coming.

To learn more about how a Capacity Management solution can help you address the above issues, download the whitepaper “Energizing Smart Growth with Network Intelligence”.
