
All posts by Avinash Ganesh

The SDN & NFV world: Things not to lose sight of!

It is natural human tendency to focus more on things that affect our present than on things that affect our future. And when it comes to issues whose full extent we are not aware of, we pay little or no attention to resolving them.

Let us look at the priority with which telecom service providers make investment decisions on the software systems they wish to have. Before a service provider can go live with a product offer for their end customers, they need to have the network in place to support the product. The software systems they need to support their products, in order of priority, are:

  • The first and most important is the billing software. They do not want to go wrong in billing, as it directly affects their revenues.
  • Second is the assurance software, to make sure the network and the services are up and running.
  • Third is the fulfillment software, to automate the order-to-activation process as much as possible.
  • Last is software that helps with strategic decision making, such as planning software.

Those are the verticals; now let us look at the horizontals, which are the domains.

  • The first and best-recorded aspect is the end customer. In most operator environments, the entire lifecycle of a customer, from acquisition to end of service, is maintained well.
  • The second is product lifecycle management. This is important to know what needs to be billed based on the product the customer is using.
  • The next important aspect is the maintenance of the service lifecycle.
  • By the time a telecom service provider gets all of the above fully operational, it is already a mammoth task, and they tend to lose focus on resource lifecycle management.

A recent survey conducted by TM Forum and led by Subex revealed the following findings:

  • 1 in 3 operators do not measure returns on CAPEX investment
  • 77% of respondents believed that inadequate asset utilization leads to increased costs
  • 55% of respondents believed that network planning is based on guesses
  • 64% believed that CAPEX planning is driven by technology rather than business objectives

From the above findings it is clear that getting the right business processes and tools around resource lifecycle management is extremely critical to the long-term health and efficient operations of a telecom service provider.

In this blog, I would like to discuss the exciting new world of SDN, NFV and cloud technologies and the relevance of resource lifecycle management in this new world. While part of the telecom operator community is already aggressively embracing SDN and NFV in their networks, others are waiting and watching to see how things progress. I strongly believe that for the telecom industry to break the shackles of shrinking margins and growing CAPEX/OPEX investment needs, the key answers can be provided by SDN, NFV and cloud technologies.

It is obvious that telecom service providers, the vendor community and standards bodies like ONF, ETSI, IETF, OPNFV etc. are spending maximum energy on how the network will work in this new world. I have also observed that the bulk of the energy TM Forum, ETSI and ONF are spending on defining standards around next-gen BSS and OSS goes into the following areas:

  1. Orchestrator
  2. VNF Manager
  3. VI (Virtualized Infrastructure) Manager
  4. SDN controllers
  5. Network and Application adapters
  6. Protocols used for communication with the devices and applications
  7. Policy engine
  8. APIs etc.

As we go about defining the standards, let us cover the lifecycles of all the domains, from the customer lifecycle through the product and service lifecycles all the way to the resource lifecycle. In this new world, resources can be physical compute, storage and network resources, or virtual resources like software licenses. Let us not restrict ourselves to defining standards only for the operational aspects of resource lifecycle management (OSS inventory), as was done in the eTOM model of TM Forum. Some work is being done by a group under the ZOOM initiative of TM Forum to define standards for onboarding of software resources. This is definitely good, but we need to cover all aspects of the lifecycle, from onboarding through end of life.

So what if we do not do it? The systems will still work, be operational and deliver services to the end customer. But we will probably end up in the same state we are in today: unable to monitor how past CAPEX decisions have fared, to optimize the investments already made, or to learn and improve in order to make better CAPEX decisions going forward.

I would like to leave you with the following thought before I end my blog. If we ask any telecom service provider how many database or web server licenses they currently have deployed in their data centers, they may or may not have an answer. But if we go further and ask how many of those licenses are actually in use, and in how many cases there is a compliance issue, I am pretty certain that almost none of them will have a precise answer. Going forward, if all network functions are going to be software running on COTS hardware, having answers to these questions will be even more important.
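The license question above can be made concrete with a small sketch. All of the data here is hypothetical: the product names, the entitlement counts and the record format are illustrations, not any operator's actual inventory.

```python
# Sketch of a license-compliance check over discovered software instances.
# Product names and entitlement counts are purely illustrative.

from collections import Counter

# Discovered software instances: (product, currently_in_use) per server.
discovered = [
    ("web-server", True), ("web-server", True), ("web-server", False),
    ("database", True), ("database", True), ("database", True),
]

# Entitlements purchased per product (hypothetical figures).
entitlements = {"web-server": 5, "database": 2}

def compliance_report(discovered, entitlements):
    """Compare deployed and in-use counts against purchased entitlements."""
    deployed = Counter(p for p, _ in discovered)
    in_use = Counter(p for p, used in discovered if used)
    report = {}
    for product, owned in entitlements.items():
        report[product] = {
            "deployed": deployed[product],
            "in_use": in_use[product],
            "unused": deployed[product] - in_use[product],   # stranded spend
            "compliant": deployed[product] <= owned,          # contract check
        }
    return report

report = compliance_report(discovered, entitlements)
```

Even this toy version answers both questions in the paragraph above: how many licenses are idle, and where deployment exceeds entitlement.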

Join the webinar on “Telecom Asset Management in the SDN & NFV world” to discuss more.

Network Discovery and Analytics – The Evolution (Part 2)

My previous blog on Network Discovery and Analytics mostly revolved around some of the core but basic functionality that a discovery system needs to possess. In this post, let us discuss some of the key challenges that discovery systems face when they first get deployed.

1) Resistance from network operations teams

  • Network operations, for example, would not want the network disturbed during peak traffic hours or during maintenance windows. To that end, they need to be confident that the discovery system can be configured so that it will under no circumstances touch the network during a configured blackout window, and that any network polling activity, irrespective of the stage it is at, will be suspended the moment a blackout window begins.
  • Non-intrusive discovery is what a network operations team will be comfortable with. By non-intrusive discovery, I mean that a device which is already overloaded in terms of resource consumption should not be further burdened by queries made to perform discovery.
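The blackout behaviour described above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical poller; the window times and device list are made up, and a production system would also handle time zones and per-region windows.

```python
# Minimal sketch of blackout-window enforcement for a network poller.
# The blackout hours and device list are illustrative assumptions.

from datetime import time, datetime

BLACKOUT_WINDOWS = [(time(18, 0), time(23, 0))]  # e.g. peak traffic hours

def in_blackout(now, windows=BLACKOUT_WINDOWS):
    """True if polling must be suspended at this moment."""
    t = now.time() if isinstance(now, datetime) else now
    for start, end in windows:
        if start <= end:
            if start <= t <= end:
                return True
        else:  # window wraps past midnight, e.g. 22:00-02:00
            if t >= start or t <= end:
                return True
    return False

def poll_devices(devices, now):
    """Poll devices, re-checking before each one so an in-flight
    sweep suspends as soon as a blackout window opens."""
    polled = []
    for device in devices:
        if in_blackout(now):
            break
        polled.append(device)   # placeholder for the real polling call
    return polled
```

The key design point is that the check happens per device, not once per sweep, so a long-running polling activity is suspended mid-flight when the window begins.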

2) Resistance from IT teams managing N/EMS systems

  • When discovering via the NBI of an N/EMS, or from gateways that maintain stateful sessions, the discovery system is often required to consume no more than a configured number of sessions. This may be needed to ensure that other northbound OSS systems are not denied a session by the N/EMS or gateways. It may also be because only a limited number of NBI sessions have been purchased from the equipment vendor, and using more sessions than purchased may lead to a contract violation or errors in discovery.
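The session cap above is a natural fit for a counting semaphore. This is a sketch under assumed names: the session limit, the `query_ems` function and its placeholder result stand in for a real NBI client.

```python
# Sketch of NBI session throttling: a bounded semaphore guarantees the
# discovery job never holds more than the purchased number of concurrent
# N/EMS sessions. Session count and query function are illustrative.

import threading

MAX_NBI_SESSIONS = 2   # e.g. sessions purchased from the equipment vendor

nbi_sessions = threading.BoundedSemaphore(MAX_NBI_SESSIONS)
active = 0             # sessions currently held (for demonstration)
peak = 0               # highest concurrency observed
lock = threading.Lock()

def query_ems(request):
    """Run one NBI query while holding a session slot."""
    global active, peak
    with nbi_sessions:          # blocks while all purchased sessions are busy
        with lock:
            active += 1
            peak = max(peak, active)
        try:
            return f"result-for-{request}"   # placeholder for the real NBI call
        finally:
            with lock:
                active -= 1

threads = [threading.Thread(target=query_ems, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

However many discovery workers run in parallel, `peak` can never exceed the configured session budget, which is exactly the contractual guarantee the IT team asks for.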

OSS network discovery applications have had to evolve and stand up to the above challenges by introducing functionality like blackout support and network-hit throttling.

Moving on to the next stage of the evolution: Tier-1 and Tier-2 operators have discovery systems deployed and use the information to keep inventories in sync with the real world. But inventories typically store services or end-to-end circuits, while what gets discovered are the individual components of a service or circuit provisioned within a network element. It is important for the service provider to view the current state of a circuit in the network and compare it with the inventory to fix misalignments. This places a responsibility on the discovery system to assimilate the service components discovered within each network element, along with the network element interconnectivity information, and to plot end-to-end circuits for various technologies.
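The circuit-assembly step can be sketched as a walk that alternates intra-element cross-connects with inter-element links. The two-table data model, the NE names and the port notation are simplifying assumptions; real discovery data is far messier.

```python
# Sketch: assembling an end-to-end circuit from per-network-element
# cross-connects plus discovered interconnectivity (PTOPO-style links).
# NE names, ports and the two-table model are illustrative assumptions.

# Cross-connects discovered inside each NE: (NE, in-port) -> (NE, out-port).
cross_connects = {
    ("NE-A", "1/1"): ("NE-A", "2/1"),
    ("NE-B", "1/1"): ("NE-B", "2/1"),
    ("NE-C", "1/1"): ("NE-C", "2/1"),
}

# Physical links between NEs: out-port -> next NE's in-port.
links = {
    ("NE-A", "2/1"): ("NE-B", "1/1"),
    ("NE-B", "2/1"): ("NE-C", "1/1"),
}

def trace_circuit(start):
    """Alternate intra-NE cross-connects and inter-NE links from an endpoint."""
    path = [start]
    hop = start
    while hop in cross_connects:
        hop = cross_connects[hop]    # traverse inside the network element
        path.append(hop)
        if hop in links:
            hop = links[hop]         # hop across the fibre to the next NE
            path.append(hop)
        else:
            break                    # far-end reached
    return path

circuit = trace_circuit(("NE-A", "1/1"))
```

The resulting hop list is the "as-discovered" circuit that can then be compared against the inventory record to spot misalignments.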

As of today, a few Tier 1 operators who are quite mature in their processes are looking to evolve their discovery system's capabilities towards near-real-time sync with the state of the network: a system that listens to traps/events from the network and refreshes its database with the latest state, and then uses this near-real-time discovery to keep inventories up to date with reality in the network.

It is another matter that a lot of inventories even today are far from accurate, despite a whole lot of tools in the market specifically designed to solve the data integrity issues of inventories. The reasons include the lack of a sound practice around the usage of these tools, a lack of commitment to a data integrity program, a failed OSS inventory transformation project, etc.

In the interim, while operators are trying to get their inventories cleaned up, we believe that a discovery system, combined with intelligent and actionable analytics, has a lot more to offer the planning and service assurance teams, as outlined below.

1) The discovered physical fabric can be compared with and/or enriched from OSS inventories and ERP systems to build a 360° view of the asset. This 360° view, when recorded over a period of time, becomes a very powerful source for a set of analytical functions that provide actionable intelligence to planning and network operations teams, helping with CAPEX avoidance.
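A minimal sketch of that enrichment, assuming three hypothetical record stores keyed by serial number (the field names are my own illustration):

```python
# Sketch of building a 360° asset view by merging discovered, inventory
# and ERP records keyed by serial number. All fields are illustrative.

discovered = {"SN123": {"model": "card-x", "site": "POP-1", "in_network": True}}
inventory  = {"SN123": {"planned_site": "POP-1"},
              "SN999": {"planned_site": "POP-2"}}
erp        = {"SN123": {"purchase_cost": 4000, "warranty": "2026-01"}}

def asset_360(serial):
    """Merge the three views of one asset into a single record."""
    view = {"serial": serial}
    view.update(discovered.get(serial, {"in_network": False}))
    view.update(inventory.get(serial, {}))
    view.update(erp.get(serial, {}))
    return view

# Assets in inventory that were never discovered in the network are
# candidates for stranded CAPEX, i.e. paid for but not deployed.
stranded = [serial for serial in inventory if serial not in discovered]
```

The stranded-asset list is one simple example of the CAPEX-avoidance intelligence mentioned above: equipment that was bought and planned but never showed up in discovery.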

2) The discovered logical/service component information, when captured over a period of time, can likewise feed other analytical functions (time-series trending and forecasting, what-if modeling and optimization) that help network operations and planning teams perform network capacity management on accurate, as-discovered information.
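As a toy instance of the trending function, here is a least-squares line fitted to discovered utilisation samples. The sample values are invented, and real capacity planning would use far richer models than a straight line.

```python
# Sketch of time-series trending on discovered port utilisation:
# fit y = a + b*x by ordinary least squares and extrapolate forward.
# Sample values are purely illustrative.

def linear_forecast(samples, periods_ahead):
    """Fit a straight line to equally spaced samples and extrapolate."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

utilisation = [40, 45, 50, 55, 60]       # % per month, made-up data
next_quarter = linear_forecast(utilisation, 3)   # 3 months ahead -> 75.0
```

In a capacity-management workflow, crossing a threshold (say 80%) in the forecast, rather than in the current reading, is what gives planning teams lead time to act.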

3) The discovered information, when assimilated with topological analytics to calculate end-to-end circuits/services, can be a powerful source of information for service assurance teams to overlay alarms on and generate a service impact view in their NOC.
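The alarm overlay itself reduces to a set intersection once circuits are expressed as lists of hops. The circuit IDs, hop notation and alarm format below are assumptions for illustration.

```python
# Sketch of a service impact view: overlay active alarms on computed
# end-to-end circuits. Circuit IDs and (NE, port) hops are illustrative.

# End-to-end circuits and the (NE, port) hops they traverse.
circuits = {
    "CKT-100": [("NE-A", "1/1"), ("NE-B", "2/1")],
    "CKT-200": [("NE-C", "1/1")],
}

def impacted_services(alarms, circuits):
    """Return IDs of circuits traversing any alarmed (NE, port)."""
    alarmed = set(alarms)
    return sorted(
        ckt for ckt, hops in circuits.items()
        if any(hop in alarmed for hop in hops)
    )

impact = impacted_services([("NE-B", "2/1")], circuits)
```

This is what turns a port-level alarm into the NOC-facing statement "circuit CKT-100 is affected", which is the service impact view described above.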

Network Discovery and Analytics – The Evolution (Part 1)

Let us look at why OSS network discovery systems came into existence, how they have evolved over the years and, in our opinion, the areas in which they will play a critical role going forward.

Most Tier-1 and Tier-2 service providers have at some point invested in building one or more inventory systems to serve as a central repository for functions like planning, design, fulfillment and assurance. For these functions to have a reliable and up-to-date view of the network, it is important that both depth and breadth of information are captured by the inventory. By breadth, I mean the ability to store information from multiple device types from multiple device vendors. By depth, I mean the ability to store physical inventory as well as logical inventory for multiple technologies like PDH, SDH/SONET, DWDM, Ethernet, ATM, FR, IP, MPLS, xDSL, FTTx, 2G, 3G, LTE etc. These inventories have largely been maintained and updated by manual data entry, and given weak processes, or a lack of discipline in following them, these systems would very quickly go out of sync with the network. Hence, OSS network discovery systems first came into existence to help keep inventories up to date with the real world. To that end, OSS discovery systems needed to take care of the following requirements.

1) Data that needs to be acquired from the network

  • Ability to perform shallow discovery, i.e. discovery of the physical fabric (physical inventory like racks, shelves, cards, ports) from most leading networking equipment vendors, based on standards like the SNMP ENTITY-MIB, MIB-2, TMF 814 etc.
  • Ability to discover deep logical information like VLANs, IP interfaces, IP subnets etc., again using standard MIBs (like RFC 2233, RFC 2674), MTNM etc.
  • Ability to discover inter-connectivity (PTOPO) between devices wherever possible.
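The shallow-discovery data model can be pictured as a containment tree, loosely modelled on the ENTITY-MIB idea of each physical entity pointing at its container. The rows below are invented, not output from a real device.

```python
# Sketch of the shallow-discovery containment tree: rack -> shelf ->
# card -> port, loosely modelled on ENTITY-MIB's entPhysicalContainedIn
# pointers. Indices and names are illustrative, not real MIB output.

# Rows as they might come back from a physical-inventory walk:
# (index, class, name, contained_in); parent 0 means top of the tree.
rows = [
    (1, "rack",  "rack-1",  0),
    (2, "shelf", "shelf-1", 1),
    (3, "card",  "card-1",  2),
    (4, "port",  "1/1",     3),
    (5, "port",  "1/2",     3),
]

def build_tree(rows):
    """Index rows by parent so the physical fabric can be walked top-down."""
    children = {}
    for idx, _cls, _name, parent in rows:
        children.setdefault(parent, []).append(idx)
    return children

tree = build_tree(rows)
ports_on_card_3 = tree.get(3, [])   # every port contained in card index 3
```

Once the flat rows are indexed this way, the rack/shelf/card/port hierarchy that the bullet list above describes falls out of a simple top-down walk from parent 0.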

2) How data needs to be acquired from the network

  • A lot of device vendors' equipment cannot be discovered using standard SNMP interfaces, hence the need to discover using other network management protocols like TL1, MML, CMIP, CORBA and Web Services.
  • Network operations teams are often uncomfortable with multiple IT systems (N/EMS, discovery, fulfillment, assurance etc.) connecting directly to the devices, for reasons such as concern about resource consumption on devices carrying revenue-generating traffic, and because the access control lists and login configurations on every device need to be modified for the discovery servers to gain access. This may seem like a trivial one-time activity, but at a few large Tier 1 operators with strong processes in place, it can run into a few months.

Hence, discovery systems were forced to interface with the northbound interfaces of N/EMS systems or gateways in order to perform discovery. On a lot of occasions these NBI APIs end up being proprietary, or customizations of standard specifications.

For the reasons mentioned above, discovery systems capable of capturing both breadth and depth in discovery have had to build many device-vendor-specific adaptors instead of standards-based adaptors.
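The usual way to keep those vendor-specific adaptors manageable is to hide them behind one interface. The sketch below uses class and method names of my own invention; it is an illustration of the adaptor pattern, not any product's API.

```python
# Sketch of a protocol-adaptor abstraction so one discovery core can
# drive SNMP, TL1 or vendor-NBI back-ends interchangeably. All class
# and method names are illustrative assumptions.

from abc import ABC, abstractmethod

class DiscoveryAdapter(ABC):
    @abstractmethod
    def fetch_physical(self, target):
        """Return discovered physical-inventory rows for one target."""

class SnmpAdapter(DiscoveryAdapter):
    def fetch_physical(self, target):
        # A real adaptor would walk the ENTITY-MIB here.
        return [("snmp", target, "chassis")]

class Tl1Adapter(DiscoveryAdapter):
    def fetch_physical(self, target):
        # A real adaptor would issue RTRV-EQPT-style commands here.
        return [("tl1", target, "shelf")]

def discover(targets):
    """Pick an adaptor per target based on its management protocol."""
    adapters = {"snmp": SnmpAdapter(), "tl1": Tl1Adapter()}
    results = []
    for proto, host in targets:
        results.extend(adapters[proto].fetch_physical(host))
    return results

inventory = discover([("snmp", "10.0.0.1"), ("tl1", "10.0.0.2")])
```

The discovery core only ever calls `fetch_physical`, so adding another vendor protocol means adding one adaptor class, not touching the core.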
