The contract billing system of one of the leading CIS mobile operators, covering fixed telephony, home Internet, TV, and mobile services, was built on the basis of the BIS billing system. It is important to note that, over the years, various services were implemented and migrated into this single billing system not solely by the supplier but also by internal development teams, so it acquired many integrations with other IT systems developed both in-house and by various vendors: payment gateways, DWH storage, HelpDesk, CRM and WorkForce systems, mediation, provisioning and notification systems, the ERP system, and financial modules (tax accounting, penalty calculation, and debt cancellation).
According to the company's plans, the BIS system was to be replaced, while remaining in operation, within 6 months in order to provide a new billing core for rapidly changing business needs. The total migration base counted approximately 5 million active subscribers, with quarterly tariff changes (repricing) as an added complication.
One of the most important requirements for the project concerned the time to launch marketing activities: no more than 1 week for simple activities and 3 weeks for complex ones. Another mandatory requirement was full post-migration compatibility with the legacy systems already integrated with the BIS billing system, in order to minimize their refactoring, since some of them either could no longer be developed or there was no time to refactor them.
Since the operator's BSS system is SOX-critical (i.e., subject to audit under SOX controls), project management faced the task of creating a software solution not only for this specific project, but one that could also solve the digital transformation tasks facing the company. The company's workflow was built on a standard SDLC process.
I'd like to describe how we implemented this process using our project as an example: it involved several development teams, and we used an Agile approach instead of Waterfall.
To form a plan under unrealistically tight project deadlines, the management team made the following decisions:
- Involve three different teams: a Business Analysis team and both internal and external development teams;
- Organize requirements gathering and analysis according to the "reverse engineering" principle. This approach implies nothing innovative except for the implementation timing, which was crucial at this stage. Essentially, we used publicly available product advertising documentation, made detailed cutoffs of the integration points, and surveyed actual users of both the BIS system and the integrated systems. It is important to understand that this approach required from the NATEC R&D team not only solid skills in designing BSS systems, but also an understanding of Peter-Service's BIS system: its general architecture and operating principles.
- Perform planning according to the Critical Path Method, which is based on identifying critical tasks with zero slack in their execution time. The method itself is well suited to project management; equally important is forming and fulfilling the list of critical tasks according to the principle "the best is the enemy of the good" (Voltaire's aphorism, in French "Le mieux est l'ennemi du bien"): despite all efforts, "the best" may never be achieved, while the already achieved "good" may be lost along the way. The company had been running various business transformation projects for several years, so the actual volume of services and the final requirements for interacting with the BSS system were largely unknown. As a result, it was decided to focus on a rapidly adaptable core of the BSS system (mediation, subscription accounting, tariff configuration, financial accounting configuration, and service provisioning). The solution had to ensure rapid growth and success when launching changes to existing or new services, along with a standardized approach, based on domain-driven design, to scheduling the launch of activities in order to simplify development and service design. In fact, most of the planning phase was spent on a retrospective of launched activities and on analyzing the requirements of roadmap projects in order to extract common features and patterns, both in process and in design. In addition, project management separately identified processes that could not be planned and controlled within this project but on which its result directly depended; for us, such an artifact was the protracted parallel implementation of a prepaid billing system by another company, interaction with which was either assumed or partly had to be implemented at the functional level.
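The zero-slack idea behind the Critical Path Method can be sketched in a few lines: a task is critical when its earliest possible start equals its latest allowable start. The task names and durations below are purely hypothetical, not taken from the actual project plan.

```python
# Illustrative Critical Path Method (CPM) sketch: tasks with zero slack
# (earliest start == latest start) form the critical path.
from collections import defaultdict

def critical_path(tasks):
    """tasks: {name: (duration, [predecessors])} -> (critical tasks, project length)."""
    # Earliest start: a task may begin once all its predecessors finish.
    earliest = {}
    def es(name):
        if name not in earliest:
            _, preds = tasks[name]
            earliest[name] = max((es(p) + tasks[p][0] for p in preds), default=0)
        return earliest[name]
    for t in tasks:
        es(t)
    finish = max(es(t) + tasks[t][0] for t in tasks)

    # Latest start: the latest a task may begin without delaying the project.
    successors = defaultdict(list)
    for name, (_, preds) in tasks.items():
        for p in preds:
            successors[p].append(name)
    latest = {}
    def ls(name):
        if name not in latest:
            dur = tasks[name][0]
            latest[name] = min((ls(s) for s in successors[name]), default=finish) - dur
        return latest[name]

    critical = [t for t in tasks if ls(t) == es(t)]  # zero slack
    return critical, finish

tasks = {
    "analysis":  (2, []),
    "core":      (4, ["analysis"]),
    "ui":        (1, ["analysis"]),
    "migration": (3, ["core"]),
    "launch":    (1, ["migration", "ui"]),
}
crit, length = critical_path(tasks)
print(crit, length)  # the zero-slack chain and the total project duration
```

Here "ui" has 6 units of slack and may slip without moving the launch date; every other task is critical, which is exactly the set that planning must protect.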
As a result, we can say with confidence that the project succeeded only because project management looked one step ahead and treated readiness for future design changes as crucial to implementation.
- Organize design and development on the basis of a Reference Model, taking into account the enterprise's existing converged infrastructure. We won't dwell on the converged infrastructure of our project, since it is not the subject of this article, but we would like to discuss the Reference Architecture, which makes modelling more efficient; more on this below.
- Develop software using an Agile approach: here we needed to ensure the independence of each team's internal workflow in order to minimize delays in deliveries to the various environments.
- Organize automated testing: we used integration script tests and the SpecFlow BDD framework for UI automation, chosen for its focus on behavior.
- Build product deployment processes on DevOps practices adapted to the requirements of the existing ITIL process. The project team's main tasks here were Deployment Automation, Application Performance Monitoring, and Configuration Management; we tried to build the solutions to all of these directly into the Reference Architecture, for which we had to build a hosting platform for IoC containers and treat versioning and configuration as separate tasks, in accordance with the Customer's approved change and release management processes in the HPSM system.
The concept of a Reference Architecture is defined by the ISO 15704 "Enterprise Modelling and Architecture" standard, which specifies that model-based enterprise architecture should support the idea of "reusable reference models". It also indicates that reference models require adaptation to a specific enterprise and that, if necessary, specific models describing a separate entity of a specific enterprise, or a part of it, may be applied.
It is important to understand that the commonly used term "IT systems architecture" is crucial, yet it is still only a part of the broader "enterprise architecture" concept: a set of interconnected architectures that reflect the structure and processes of the enterprise. In other words, the enterprise architecture should:
- allow modelling the project life cycle from the initial concept through functional design or specification, detailed design, implementation, and operation, to decommissioning;
- encompass the people, processes, and infrastructure involved in the fulfilment, management, and control of the enterprise's business.
Thus, from an application point of view and in the context of this article, architectures can be divided into system architectures and enterprise architectures. We deliberately avoid giving a rigorous definition of architecture and its classification, so as not to provoke unnecessary discussion over differences with the outdated ISO/IEC/IEEE 42010:2011 "Systems and software engineering – Architecture description" standard.
Returning to the Design and Development phase: since the company's IT strategy followed the recommendations of the TM Forum, we had to create an open Reference Architecture focused on the domain model of the BIS billing system, yet adapted to the company's current needs and enabling effective digital transformation of products and services in the future. From a practical point of view, the Reference Architecture is based on a set of decision patterns and approaches, which makes it reusable "out of the box" for launching new products and services on an enterprise-wide scale. Project management had to choose templates of approaches and solutions: on the one hand, for implementing and developing the new BSS billing core, and on the other, for simple integration with a huge number of legacy systems, outdated processes and, in places, infrastructure. It is important to note that the choice of approaches and decision templates was delegated to the implementation team, which ultimately demonstrated the high efficiency of the Agile project management methodology; without it, the project could not have been delivered on time. The actual content of the Reference Architecture had to reflect the local specifics of the enterprise and therefore contained more IT components, but the project team also considered the requests and requirements of all business process participants within the Customer's company, including the marketing departments, project management, the financial department, and IT Operations. The result was the MEF.DEV Reference Architecture, uniting business processes, information flows, and the organizational and staff structures supporting them, together with the system architecture (application, data, network, and platform architectures).
From a practical point of view, the MEF.DEV Reference Architecture workflow includes four mandatory stages:
- First, run a decomposition based on the DDD approach in order to isolate and standardize the services or products of a specific subject area (domain).
- Based on the decomposition, develop models of entities and of the actions over them. These include types and behavior common to all services/products, plus the specific resources, interactions, and models created for each service/product. Some of these models refer to multiple entities (for example, there is significant overlap between Customer models and their Associations). The result is a unified SID (Shared Information and Data) model for the domain; in our case, the domain was telecom services. The idea of such a model is not new for telecom providers: it is a mandatory part of the NGOSS concept (the ITU-T M.3050 recommendations, based on TM Forum work) and serves as the basis for integrating OSS/BSS-class systems.
- Next comes the common stage of forming the Business Requirements (BR) and the specific Use Cases for a service/product: defining the functional requirements, use cases, business process mapping, etc., related to specific entities (for example, an Activation action for FTTB services or a PSTN service connection).
- After that, an iterative development process begins, whose result is IoC containers (NuGet packages with inversion of control). This is necessary in order to map entity and action models onto native data schemas available through Dependency Injection, and to automatically generate REST interface implementations and documentation for developers.
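The shape of this output (a domain entity, an action over it, and an IoC container resolving the action) can be sketched as follows. All names here (Subscriber, ActivationService, Container) are illustrative toys, not the MEF.DEV API; the real artifacts are .NET NuGet packages wired through Dependency Injection.

```python
# Minimal sketch of the workflow's output: a SID-style domain entity plus
# an "action" service exposed through a toy IoC container.
from dataclasses import dataclass

@dataclass
class Subscriber:            # domain entity from the decomposition stage
    msisdn: str
    tariff: str
    active: bool = False

class ActivationService:     # an action over the entity (e.g., FTTB activation)
    def activate(self, s: Subscriber) -> Subscriber:
        s.active = True
        return s

class Container:
    """Toy IoC container: factories are registered and resolved by name."""
    def __init__(self):
        self._registry = {}
    def register(self, name, factory):
        self._registry[name] = factory
    def resolve(self, name):
        return self._registry[name]()   # a fresh instance per resolve

container = Container()
container.register("activation", ActivationService)

svc = container.resolve("activation")
sub = svc.activate(Subscriber(msisdn="380501234567", tariff="basic"))
print(sub.active)
```

The point of the indirection is that a consumer asks the container for "activation" without knowing which package version supplies the implementation, which is what makes independent delivery and version switching possible later.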
It is also worth noting the important role of the SID model: it improves the business analysis process and increases the efficiency of interaction with developers within the SDLC process, since it allows processes to be identified and standardized with respect to products and services (an Action is the atomic part of the process flow). A part of the WideCoup.Entities model for the Telecom domain in our project is shown in the picture below:
According to MEF.DEV Reference Architecture, the development process consisted of the following stages:
- Create a new solution package or auto-generate it using the Database-First or Model-First approach.
- Run an initial test of the solution package within the local environment.
- Register the first version of the package in the sandbox on the platform (deploy).
- Create a configuration for a package on the MEF.DEV platform.
- Publish a sandboxed version of the package on the MEF.DEV platform (every time a new version is published, the platform automatically builds the package and deploys it for future use "on the go"). This process also generates the actual specification of the specific package version (CI/CD is optional).
- For integration testing, publish the package in the E2E environment (CI/CD is optional) with a separate configuration. This process also generates the actual technical specification.
- To bring capacity online, IT Ops publishes the package in the production environment (CI/CD is optional).
- During operation, IT Ops manages versions and configurations of previously published packages based on performance data for each version.
It is worth mentioning that deployment automation was achieved by writing loosely coupled application code based on IoC containers: NuGet packages from different development teams were assembled on the go, independently (provided, of course, that backward compatibility held), making it possible to switch versions, including rolling back to a previous version of the application.
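The version-switching behaviour can be sketched as a small registry that remembers the activation order, so a faulty release can be undone. The PackageHost class below is a hypothetical stand-in for the platform's hosting layer, not its real interface.

```python
# Illustrative sketch of on-the-go version switching with rollback.
class PackageHost:
    def __init__(self):
        self._versions = {}     # version -> deployable artifact
        self._history = []      # activation order, consulted on rollback

    def publish(self, version, artifact):
        self._versions[version] = artifact

    def activate(self, version):
        if version not in self._versions:
            raise KeyError(f"version {version} is not published")
        self._history.append(version)

    def rollback(self):
        """Return to the previously active version."""
        if len(self._history) < 2:
            raise RuntimeError("no previous version to roll back to")
        self._history.pop()
        return self._history[-1]

    @property
    def active(self):
        return self._history[-1] if self._history else None

host = PackageHost()
host.publish("1.0.0", "billing-core-1.0.0.nupkg")
host.publish("1.1.0", "billing-core-1.1.0.nupkg")
host.activate("1.0.0")
host.activate("1.1.0")
host.rollback()            # a faulty 1.1.0 is withdrawn
print(host.active)
```

Because the packages are loosely coupled and backward compatible, rollback here is a metadata change rather than a redeployment, which is what keeps the switch fast.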
The Application Performance Monitoring task was solved with logging at the level of the IoC container hosting platform, which made it possible to track changes in the performance of particular versions.
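Conceptually, the hosting platform can wrap each package entry point and record call durations keyed by package version, so a regression shows up as a per-version shift. The decorator below is a minimal sketch of that idea; the names and the metrics store are assumptions for illustration.

```python
# Version-aware performance logging sketch: the host times every call
# and buckets the durations by (package version, operation).
import time
from collections import defaultdict

metrics = defaultdict(list)   # (version, operation) -> list of durations

def monitored(version, operation):
    """Decorator a hosting platform could apply around package entry points."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                metrics[(version, operation)].append(time.perf_counter() - start)
        return inner
    return wrap

@monitored("1.1.0", "charge")
def charge(amount):
    return amount + 20        # stand-in for real billing logic

charge(100)
charge(250)
print(len(metrics[("1.1.0", "charge")]))
```

Comparing the duration lists for "1.1.0" against "1.0.0" is then enough to decide whether a newly activated version should be rolled back.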
When solving the Configuration Management task, the Customer's configuration management process required adding support for roles and several types of configurations for the IoC container host (developer, administrator, and user configurations), thereby excluding environment- or permission-dependent configurations from the delivery.
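One simple way to realize role-scoped configuration is layered merging, where more specific roles override broader ones. The precedence below (user over administrator over developer) is an assumption for illustration, not the platform's documented order.

```python
# Sketch of role-scoped configuration resolution via layered merging;
# later (more specific) layers override earlier ones.
def resolve_config(developer, administrator, user):
    """Merge per-role configuration dictionaries; later roles win."""
    merged = {}
    for layer in (developer, administrator, user):
        merged.update(layer)
    return merged

cfg = resolve_config(
    developer={"log_level": "DEBUG", "cache_ttl": 60},
    administrator={"log_level": "INFO", "db_pool": 20},
    user={"theme": "dark"},
)
print(cfg["log_level"])
```

The delivery then ships only the developer layer; the administrator and user layers live with the environment, which is exactly what keeps environment- and permission-dependent settings out of the package.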
Moving on to scaling and platform implementation: we decided to support horizontal scaling through a stateless implementation of each node and the use of a distributed cache based on an MS SQL Always On cluster.
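The reason statelessness enables horizontal scaling is that any node can serve any request, since session state lives in the shared cache rather than in node memory. The sketch below stands in for the distributed cache with a plain dictionary; in the project it was backed by the MS SQL Always On cluster.

```python
# Stateless nodes sharing session state through a common cache:
# the session survives even as a round-robin balancer switches nodes.
class SharedCache:
    """Stand-in for the distributed cache (MS SQL Always On in the project)."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

class StatelessNode:
    """Holds no session state of its own; everything goes to the cache."""
    def __init__(self, name, cache):
        self.name = name
        self.cache = cache
    def handle(self, session_id, request):
        state = self.cache.get(session_id) or {"requests": 0}
        state["requests"] += 1
        self.cache.put(session_id, state)
        return f"{self.name} served request #{state['requests']}"

cache = SharedCache()
nodes = [StatelessNode("node-a", cache), StatelessNode("node-b", cache)]
# Naive round-robin dispatch across two nodes for one session.
replies = [nodes[i % 2].handle("sess-42", "charge") for i in range(3)]
print(replies[-1])
```

Adding capacity then reduces to starting another node against the same cache, with no session affinity required at the balancer.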
It took us 8 months to implement the project according to the final plan, including 2 extra months needed to settle contract issues. During implementation, a number of essential requirements had to be revised and were implemented successfully, in particular online integration with the high-load interfaces of the payment gateway and a large number of UI changes driven by user feedback. Nevertheless, these changes did not affect the launch date, primarily due to the project's own readiness for change. The target system architecture is shown in the figure below: