Network Discovery and Analytics – The Evolution (Part 2)
My previous blog on Network Discovery and Analytics revolved around some of the core but basic functionality that a discovery system needs to possess. In this post, let's discuss some of the key challenges that discovery systems face when they are first deployed.
1) Resistance from network operations teams.
- Network operations teams, for example, want the network left undisturbed during peak traffic hours and maintenance windows. They therefore need confidence that the discovery system can be configured so that it will under no circumstances touch the network for a given duration, and that any network polling activity, irrespective of the stage it has reached, is suspended the moment a blackout window begins.
- Non-intrusive discovery is what earns a network operations team's confidence. By non-intrusive discovery, I mean that a device which is already overloaded in terms of resource consumption should not be burdened further by queries for discovery information.
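The blackout behaviour described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: `BLACKOUT_WINDOWS`, `in_blackout` and `poll_devices` are names invented here, and the key point is that a polling run checks the window between devices and suspends immediately, returning its remaining work so it can resume later.

```python
from datetime import datetime, time

# Illustrative sketch only: blackout windows are (start, end) local times
# during which the discovery system must not touch the network.
BLACKOUT_WINDOWS = [(time(18, 0), time(22, 0))]  # e.g. evening peak hours

def in_blackout(now=None):
    """Return True if 'now' falls inside any configured blackout window."""
    t = (now or datetime.now()).time()
    for start, end in BLACKOUT_WINDOWS:
        if start <= end:
            if start <= t <= end:
                return True
        elif t >= start or t <= end:  # window wraps past midnight
            return True
    return False

def poll_devices(devices, poll_one, blackout=in_blackout):
    """Poll devices one by one, suspending the moment a blackout opens.

    Returns the devices left unpolled so the activity can resume later,
    irrespective of the stage the polling run had reached.
    """
    remaining = list(devices)
    while remaining:
        if blackout():
            return remaining  # suspend immediately; caller reschedules
        poll_one(remaining.pop(0))
    return []
```

Checking between devices (rather than only at the start of a run) is what gives operations teams confidence that a long-running discovery will stop mid-activity when a window opens.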
2) Resistance from IT teams managing N/EMS systems.
- When discovering from the NBI of an N/EMS, or from gateways that maintain stateful sessions, the discovery system is often required to consume no more than a configured number of sessions. This may be needed to ensure that other northbound OSS systems are not denied a session by the N/EMS or gateway. It may also be that only a few NBI sessions have been purchased from the equipment vendor, and using more sessions than purchased could lead to a contract violation or errors in discovery.
OSS network discovery applications have to evolve to stand up to the above challenges by introducing functionality such as blackout support and network-hit throttling.
Moving on to the next stage of the evolution: Tier-1 and Tier-2 operators have discovery systems deployed and use the discovered information to keep inventories in sync with the real world. But inventories typically store services or end-to-end circuits, while what is discovered are the individual components of a service or circuit provisioned within each network element. It is important for the service provider to view the current state of a circuit in the network and compare it with the inventory to fix misalignment. This places a responsibility on the discovery system to assimilate the service components discovered within each network element, along with the network element interconnectivity information, and plot end-to-end circuits for various technologies.
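The assimilation step above amounts to a graph walk: follow a cross-connect inside one network element, then follow the discovered link to the next element, and repeat. The data model below is a simplified assumption for illustration ("NE"/port tuples, one cross-connect per port), not a real discovery schema.

```python
# Illustrative sketch only: stitch an end-to-end circuit from per-NE data.
# Each network element reports a cross-connect (ingress port -> egress port),
# and link discovery reports which egress port connects to which remote
# ingress port. Alternately following both walks the circuit end to end.

cross_connects = {            # per-NE: ingress port -> egress port
    ("NE1", "p1"): ("NE1", "p2"),
    ("NE2", "p1"): ("NE2", "p2"),
    ("NE3", "p1"): ("NE3", "p2"),
}
links = {                     # interconnectivity: egress -> remote ingress
    ("NE1", "p2"): ("NE2", "p1"),
    ("NE2", "p2"): ("NE3", "p1"),
}

def trace_circuit(start):
    """Follow cross-connects and links from a starting ingress port."""
    path, port = [start], start
    while True:
        egress = cross_connects.get(port)
        if egress is None:
            break            # no further cross-connect: circuit ends here
        path.append(egress)
        port = links.get(egress)
        if port is None:
            break            # no further link: far end of the circuit
        path.append(port)
    return path
```

Real technologies (SDH, OTN, MPLS) add layering and multiplexing on top of this, but the core of plotting an end-to-end circuit is this alternation between intra-element and inter-element hops.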
As of today, a few Tier-1 operators who are quite mature in their processes are looking to evolve their discovery system's capabilities to stay in near-real-time sync with the state of the network: a system that listens to traps/events from the network and refreshes its database with the latest network state, and then uses that near-real-time discovery to keep the inventories up to date with the reality in the network.
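The trap-driven pattern can be sketched as an event queue feeding targeted rediscovery. This is a hypothetical outline, assuming some trap receiver (e.g. an SNMP listener) calls `on_trap`; the point is that only the affected element is re-polled, rather than waiting for the next full discovery cycle.

```python
from queue import Queue

# Illustrative sketch only: trap-driven, near-real-time refresh. A listener
# queues traps/events and a worker rediscovers just the affected device,
# keeping the database in near-real-time sync with the network.

trap_queue = Queue()
database = {"NE1": {"oper_status": "up"}}

def on_trap(source_ne, varbinds):
    """Called by the trap receiver for each incoming trap/event."""
    trap_queue.put((source_ne, varbinds))

def refresh_worker(rediscover):
    """Drain queued traps and refresh only the devices that changed."""
    while not trap_queue.empty():
        ne, _varbinds = trap_queue.get()
        database[ne] = rediscover(ne)  # targeted poll of just this NE
```

Decoupling receipt (the queue) from refresh (the worker) also lets the worker respect the blackout and throttling rules discussed earlier while traps keep accumulating.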
That said, a lot of inventories even today are far from accurate, despite the many tools on the market specifically designed to solve inventory data-integrity issues. The reasons range from the lack of a sound practice around the usage of these tools, to a lack of commitment to a data-integrity program, to a failed OSS inventory transformation project.
In the interim, while operators work on cleaning up their inventories, we believe that a discovery system combined with intelligent, actionable analytics has a lot more to offer to planning and service assurance teams, as outlined below.
1) The discovered physical fabric can be compared and/or enriched with OSS inventories and ERP systems to build a 360° view of the asset. That 360° view, when recorded over a period of time, becomes a very powerful source for a set of analytical functions that provide actionable intelligence to planning and network operations teams, helping with capex avoidance.
2) The discovered logical/service component information, when captured over a period of time, can likewise feed a set of other analytical functions (time-series trending and forecasting, what-if modeling and optimization) that help network operations and planning teams perform network capacity management on accurate, as-discovered information.
3) The discovered information, when assimilated with topological analytics to calculate end-to-end circuits/services, can be a powerful source for service assurance teams to overlay alarms and generate a service impact view in their NOC.
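The trending-and-forecasting idea in point 2 can be made concrete with the simplest possible model: fit a straight-line trend to discovered utilisation samples and estimate when a link will breach a capacity threshold. This is a toy least-squares sketch with invented sample data, not a capacity-management product; real forecasting would account for seasonality and confidence intervals.

```python
# Illustrative sketch only: ordinary least-squares trend over (day, percent)
# utilisation samples, used to forecast a capacity-threshold breach.

def linear_trend(samples):
    """Least-squares slope and intercept for (x, y) samples."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def breach_day(threshold, samples):
    """Forecast the day index at which utilisation crosses the threshold."""
    slope, intercept = linear_trend(samples)
    if slope <= 0:
        return None  # flat or declining trend: no breach forecast
    return (threshold - intercept) / slope

# Hypothetical discovered samples: (day, utilisation %) for one link.
samples = [(0, 40.0), (30, 50.0), (60, 60.0)]
```

Fed with as-discovered rather than inventory-recorded utilisation, even a model this simple gives planning teams an early, defensible signal for capacity upgrades.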
“Avinash is a seasoned professional with over 13 years of experience in the telecommunication B/OSS space. He is a graduate of the Illinois Institute of Technology with a master's in Telecommunications and Software Engineering. He currently heads Product Management for the Network Analytics suite of products at Subex. Avinash has experience defining strategy and roadmap for products ranging from ones that have been in the market for over a decade to a brand-new product being launched to the market. Prior to moving into the product ownership role, he played multiple roles within the product engineering, delivery and pre-sales organizations, such as Product Architect, Solutions Architect and Technical Sales. An active member of TM Forum, he works closely with its Asset Management group in defining business process standards for the asset management and assurance practices of telecom service providers.”