
Data Integrity


Your network has a story to tell you about wasteful Capex practices that are likely reducing your Return on Capital. If you are like most Operators, you may not be listening.

Much of the Telecom Industry's recent focus has been placed on customer experience management (CEM) and related analytics. Certainly, customer acquisition and retention programs are critical, as these drive revenue. Network augments and migrations to new technologies are an unavoidable "price to pay," and the lion's share of management's attention goes to squeezing as much revenue traffic onto pipes and spectrum as possible.

Trouble is, EBITDA margins are being squeezed for reasons I'm sure you are all too familiar with. Among many Operators with whom Subex has spoken recently, there is a growing recognition that network costs must be better managed, but also frustration that a lack of visibility and insight undermines the ability to do so.

As I said, your network has a story to tell you—in fact, many stories.  What’s more, it will give you critical information that your ERP or Asset Tracking system simply can’t.  Without this information, your ability to optimize Capex throughout the asset lifecycle can be significantly eroded.

Can you answer:

Where are my assets?

ERPs are important for managing vendor relationships, driving supply chain processes and tracking warehouse inventory. Once an asset leaves the warehouse, responsibility for tracking and managing the asset typically shifts to technical OSSs (e.g. Network Inventory). Data quality within technical OSSs is notoriously poor. As a result, assets can become stranded, under-utilized and/or lost. Consequently, Operators spend Capex that could otherwise be avoided if existing assets were effectively harvested and redeployed.

[Figure: The Asset Lifecycle and Relative Positioning of ERPs vs. Technical OSSs]

When are my assets generating returns?

A critical capital management objective is minimizing the cash-to-cash cycle. This is the interval between paying cash to a vendor and receiving cash from a customer once an asset becomes productive (i.e. carries revenue traffic). Each extra day in the cycle increases your cost of capital. Reducing the cycle requires that you know the answer to:

  • How much time elapsed from the purchase of an asset until deployment in the network?
  • How much time elapsed from deployment of the asset until it became productive?

Equipped with such time-to-value analytics, finance can better hold Network Operations accountable for any excessively long intervals.  Network Operations also has the actionable information it needs to identify and correct inefficient deployment and service delivery processes.
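As a sketch of what such time-to-value analytics might compute, consider the following. The record layout, identifiers and dates are invented for illustration; they are not an actual ROC Asset Assurance schema.

```python
from datetime import date

def time_to_value(asset):
    """Return (days from purchase to deployment,
               days from deployment to first revenue,
               total cash-to-cash interval in days)."""
    ttd = (asset["deployed"] - asset["purchased"]).days
    ttr = (asset["productive"] - asset["deployed"]).days
    return ttd, ttr, ttd + ttr

# Invented lifecycle records for illustration.
assets = [
    {"id": "OLT-0042", "purchased": date(2013, 1, 15),
     "deployed": date(2013, 3, 2), "productive": date(2013, 4, 20)},
    {"id": "RTR-0117", "purchased": date(2013, 2, 1),
     "deployed": date(2013, 2, 20), "productive": date(2013, 6, 30)},
]

for a in assets:
    ttd, ttr, total = time_to_value(a)
    print(f"{a['id']}: {ttd} days to deploy, {ttr} days to revenue, {total} total")
```

Trending these three intervals per asset class is what lets finance flag the excessively long ones.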

Where did my assets go?

A very common dysfunction is mismanagement of assets once they are decommissioned or retired.  Some assets remain powered but unproductive, contributing to excessive energy costs.  Others simply disappear (whether moved, shelved or pilfered) and are no longer available for re-provisioning or salvage.  A recent PwC survey found that “one half of wireline operators and over one-third of wireless operators indicated that less than 50% of their assets are currently catalogued and managed.”  Network Intelligence enables Operators to track movement of assets in the network and provides an early alert when an asset has been removed and does not reappear elsewhere.
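A minimal sketch of that alerting logic, assuming periodic network discovery snapshots keyed by serial number (all serial numbers and site names here are invented):

```python
# Compare two discovery snapshots (serial number -> location).
# Invented data: SN1002 has vanished; SN1003 has moved sites.
yesterday = {"SN1001": "CO-Denver", "SN1002": "CO-Boulder", "SN1003": "CO-Denver"}
today = {"SN1001": "CO-Denver", "SN1003": "CO-Aurora"}

# Assets seen in both snapshots but at different locations: legitimate moves.
moved = {sn: (yesterday[sn], today[sn])
         for sn in yesterday.keys() & today.keys()
         if yesterday[sn] != today[sn]}

# Assets seen yesterday but nowhere today: candidates for an early alert.
missing = yesterday.keys() - today.keys()

print("moved:", moved)
print("missing:", sorted(missing))
```

An asset in `missing` that does not reappear in subsequent snapshots is exactly the "removed and never resurfaced" case worth escalating.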

What assets do I need?

A critical component of avoiding unnecessary Capex is having accurate and timely Network Intelligence to guide the budgeting, forecasting and planning process.  This is especially important for portions of the network which are most sensitive to traffic growth.  It is essential to monitor resource utilization and equip planners with metrics and trending to ensure assets are purchased when needed, where needed and for the right purpose.

Introducing ROC Asset Assurance

Drawing on our industry leadership in Data Integrity Management, Capacity Management, Network Discovery and Analytics, Subex is launching ROC Asset Assurance to harness Network Intelligence throughout the asset lifecycle, doing for Capex management what Subex has famously done for Revenue Assurance and other business optimization areas. Look for more details on ROC Asset Assurance in the days and weeks to come.


Now, referring to the title, you may be thinking: That’s a rather cheeky thing to say given the high direct and indirect costs of errant data incurred by virtually all operators.   You might cite the significant Opex penalty related to reworking designs and to service activation fallout.   I get that.  What about the millions of USD in stranded Capex most operators have in their networks?  Check.  My personal favorite comes from Larry English, a leading expert on information quality, who has ranked poor quality information as the second biggest threat to mankind after global warming.  And here I was worried about a looming global economic collapse!

My point is actually that the discrepancies themselves have no business value. They are simply an indicator of things gone bad; the canary in the coal mine. These "things" are likely some combination of people, processes and system transactions, of course. Yet many operators make finding and reporting discrepancies the primary focus of their data quality efforts. Let's face it, anyone with modest Excel skills can bash two data sets together with MATCH and VLOOKUP functions and bask in the glow of everything that doesn't line up. Sound familiar?
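In Python terms, that Excel exercise boils down to a raw set difference. The equipment identifiers below are invented; note that a mere casing difference is enough to land an asset on both discrepancy lists:

```python
# Naive reconciliation, roughly what MATCH/VLOOKUP produces: every
# non-identical key is reported, with no normalization or prioritization.
network = {"rtr-nyc-01", "RTR-NYC-02", "olt-bos-07"}     # discovered in the network
inventory = {"RTR-NYC-01", "RTR-NYC-02", "OLT-BOS-07"}   # recorded in Inventory

in_network_only = network - inventory
in_inventory_only = inventory - network

print("in network, not in inventory:", sorted(in_network_only))
print("in inventory, not in network:", sorted(in_inventory_only))
```

Four "discrepancies" are reported here, yet not one of them is material: the two data sets describe the same three assets.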

For context, I am mostly referring to mismatches between the network and how the network is represented in back-office systems like Inventory—but the observations I will share can be applied to other domains.   Data anomalies, for example, are all too common when attempting to align subscriber orders and billing records in the Revenue Assurance domain.

Too often, Data Integrity Management (DIM) programs start with gusto and end with a fizzle, placed on a shelf so that shinier (and easier!) objects can be chased.  Why is this?  Understanding that I am now on the spot to answer my own rhetorical question, let me give it a go.

  • The scourge of false positives: There are few things as frustrating as chasing one’s tail.  Yet that is the feeling when you find that a high percentage of your “discrepancies” are not material discrepancies (i.e. an object in the Network but not in Inventory) but simply mismatches in naming conventions.   A DIM solution must profile and normalize the data that are compared so as not to spew out a lot of noise.
  • The allure of objects in the mirror that are closer than they appear: OK, not sure this aphorism works, but I trust you to hang with me. I am referring to misplaced priorities: paying attention to one (closer, easier) set of discrepancies while ignoring another set that might yield a bigger business impact once corrected. Data quality issues must be prioritized, with priorities established based upon clear and measurable KPI targets. If you wish to move the needle on service activation fallout rates, for example, you need to understand the underlying root causes and be deliberate about going after those for correction. Clearly, you should not place as much value on finding "stranded" common equipment cards as on recovering high-value optics that can be provisioned for new services.
  • The tyranny of haphazard correction: I’m alluding here to the process and discipline of DIM.  Filtered and prioritized discrepancies should be wrapped with workflow and case management in a repeatable and efficient manner.  The goals are to reduce the cost and time related to correction of data quality issues.  If data cleanse activities are unstructured and not monitored by rigorous reporting, the business targets for your DIM program are unlikely to be met.
  • The failure to toot one's own horn: Let's say that your data integrity efforts have met with some success. Do you have precise measurements of that success? What is the value of recovered assets? How many hours have been saved in reduced truck rolls related to on-demand audits? Have order cycle times improved? By how much? Ideally, can you show how your DIM program has improved metrics that appear on the enterprise scorecard? It is critical that the business stakeholders and the executive team have visibility into the value returned by the DIM program. Not only does this enable continued funding, but it could set the stage for "self-funding" using a portion of the cost savings.
  • The bane of “one and done”:  For a DIM program to succeed in the long run, I suggest drawing from forensic science and tracing bad data to underlying pathologies… i.e. people, process and/or system breakdowns.   A formal data governance program that harnesses analytics to spotlight these breakdowns and foster preventive measures is highly recommended. The true power of DIM is in prevention of future data issues so that the current efforts to cleanse data will not simply be erased by the passage of time.
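The normalization called for in the first bullet can be sketched as follows. The rule and the identifiers are illustrative; real naming conventions vary by vendor and OSS, and a production profiler would learn them from the data.

```python
import re

def normalize(name: str) -> str:
    """Collapse separators and case-fold so naming-convention
    differences are not reported as discrepancies."""
    return re.sub(r"[-_/\s]+", "-", name.strip()).upper()

network = {"rtr_nyc 01", "OLT-BOS-07"}
inventory = {"RTR-NYC-01", "olt-bos-07", "MUX-CHI-02"}

# After normalization, only the genuinely unmatched asset remains.
material = {normalize(n) for n in inventory} - {normalize(n) for n in network}
print(sorted(material))
```

The raw comparison would have reported five discrepancies; normalization reduces that to the single material one.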

Identifying data discrepancies is a good first step.  Correcting and preventing them is even better.    Institutionalizing DIM via continuously measuring and reporting your successes… well, you get the idea.
