Research & Innovation

Programming a Paradigm Shift: Software 1.0 to Software 2.0

 
August 2, 2019

We all remember the advent of computers – writing code that led to the automation of numerous business processes. Business applications were – in fact, still are – written in procedural, object-oriented, and functional languages such as C, C++, Java, Scala, and Python. These programs gave the computer explicit instructions to automate manual tasks such as inventory management, payroll processing and the like.

Over time, this automation came to assist business executives across a range of functions. Developers followed the traditional waterfall model to build these applications, writing and testing source code in programming languages to meet business process requirements. This was the Software 1.0 phase, where humans gave commands to machines.

About a decade ago, however, enterprises started building applications by automatically generating application code from business requirements. With advances in deep learning (DL), we can now train neural networks that, in effect, learn the instructions and write the code themselves. Programming a function with such a neural network model is referred to as Software 2.0.

Does this mean that software programmers have ceased to exist? Does this mean we will gather data, feed it into machines and wait for an application to churn out programs?

Let’s compare the two programming techniques to get a better understanding of the scenario.

Software 1.0 vs. Software 2.0

With data – along with high-performance computing – taking center stage today, developers have been motivated to build analytical models that assist business managers in decision-making. Continuous research in artificial intelligence (AI) technology, including DL neural networks, has given birth to systems such as AlphaGo, which has surpassed human capabilities at specific tasks such as the game of Go. This has prompted business managers to shift from Software 1.0-based decision-making systems, such as data warehouses, to Software 2.0-based decision systems built on neural network models.

Building such effective models requires collaboration between domain experts and data scientists: working with large volumes of data, choosing the right algorithms, curating clean labelled data, and tuning hyperparameters. The success of these trained models at predictive tasks led to efforts to build frameworks and systems for democratizing AI. With these systems, any non-expert user can build analytical models for a specific use case.
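The hyperparameter-tuning step mentioned above can be made concrete with a minimal sketch: try candidate settings, score each on a held-out validation split, and keep the best. The model here (a single decision threshold) and the data are toy placeholders, not any particular framework's API.

```python
# Minimal hyperparameter selection sketch: evaluate each candidate
# setting on a validation split and keep the best-scoring one.

def fit_threshold(train, threshold):
    """Toy 'model': classify x as 1 if x > threshold.
    (The training data is unused by this placeholder model.)"""
    return lambda x: 1 if x > threshold else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

train_set = [(0.1, 0), (0.2, 0), (0.7, 1), (0.9, 1)]
valid_set = [(0.3, 0), (0.8, 1)]

# Grid search over the one hyperparameter: the threshold.
best = max((accuracy(fit_threshold(train_set, t), valid_set), t)
           for t in (0.25, 0.5, 0.75))
print(best)  # (best validation accuracy, chosen threshold)
```

Democratized-AI frameworks automate exactly this loop – model choice, tuning, and validation – behind a simpler interface.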

The availability of democratized AI frameworks, strong research in DL algorithms and the success of DL models at their tasks have motivated users to implement traditional functions with these models. For example, a database index can be built dynamically by learning the data access pattern, unlike in a traditional system, where a user has to define an index statically based on the business use case.
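The learned-index idea can be sketched in a few lines: instead of a static structure such as a B-tree, fit a model that maps a key to its approximate position in sorted storage, then search a small window around the prediction. The function names and error bound below are illustrative, not a real database API.

```python
# Sketch of a "learned index": a model predicts where a key lives
# in sorted storage, replacing a statically built index structure.

def fit_linear(keys):
    """Least-squares line mapping key -> index in the sorted key list."""
    n = len(keys)
    mx = sum(keys) / n
    my = (n - 1) / 2                      # mean of positions 0..n-1
    cov = sum((x - mx) * (i - my) for i, x in enumerate(keys))
    var = sum((x - mx) ** 2 for x in keys)
    slope = cov / var
    return slope, my - slope * mx

def lookup(keys, model, key, err=8):
    """Predict a position, then scan a small window around it."""
    slope, intercept = model
    guess = int(slope * key + intercept)
    lo = max(0, guess - err)
    hi = min(len(keys), guess + err + 1)
    for i in range(lo, hi):
        if keys[i] == key:
            return i
    return -1  # a real system would fall back to binary search

keys = sorted(range(0, 1000, 7))          # synthetic, roughly linear keys
model = fit_linear(keys)
print(lookup(keys, model, 693))           # position found via the model
```

Because the model is retrained as access patterns and data change, the "index" adapts to the workload rather than being fixed at design time.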

With a model-based approach, a database system can learn the business use case and decide to create relevant indexes automatically. Also, the traditional query optimizer in databases may be replaced by a model-based query optimizer that learns and customizes query optimization for each use case. Similarly, while the traditional approach to building models requires users to hand-label the training data, approaches like Snorkel, based on the new data programming paradigm, can build a model from domain functions supplied for the business use case, which is then used to label the training data.
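The data-programming idea behind Snorkel can be illustrated with a simplified sketch: domain experts write noisy labeling functions instead of hand-labeling examples, and the functions' votes are combined into training labels. Real Snorkel learns each function's accuracy with a generative model; the version below just takes a majority vote, and the functions themselves are made-up examples.

```python
# Simplified data-programming sketch (not the actual Snorkel API):
# noisy, expert-written labeling functions vote on each example.

SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_has_link(text):
    return SPAM if "http://" in text else ABSTAIN

def lf_mentions_prize(text):
    return SPAM if "prize" in text.lower() else ABSTAIN

def lf_short_greeting(text):
    return HAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

LFS = [lf_has_link, lf_mentions_prize, lf_short_greeting]

def weak_label(text):
    """Majority vote over the labeling functions that did not abstain."""
    votes = [lf(text) for lf in LFS if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

docs = ["Hello, lunch tomorrow?", "You won a PRIZE http://x.example"]
labels = [weak_label(d) for d in docs]
print(labels)  # weak labels for downstream model training
```

The resulting weak labels then train a conventional model, so the expensive hand-labeling step is replaced by reusable domain functions.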

Software 1.0 delivery deploys the complete application over a number of iterations, merging incremental pieces of the application into the existing deployment. Software 2.0-based applications, on the other hand, can adapt their logic dynamically based on the data they generate, and are deployed as models. We can, in fact, foresee application developers building Software 2.0 applications just by providing domain functions, input features and initial weights to the neural networks.
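That workflow – supply input features, initial weights and a target behaviour, and let training write the program – can be sketched with a single-neuron model. This toy perceptron learning logical OR stands in for a real neural network; the names and update rule are a deliberately minimal illustration, not a production framework.

```python
# Software 2.0 in miniature: the developer provides features, initial
# weights and example behaviour; training updates write the "program".

def predict(w, x):
    """Single neuron: bias w[0] plus weighted inputs, thresholded."""
    s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if s > 0 else 0

def train(examples, w, lr=0.1, epochs=50):
    """Perceptron rule: nudge weights toward each mispredicted example."""
    for _ in range(epochs):
        for x, y in examples:
            err = y - predict(w, x)
            w[0] += lr * err
            for i, xi in enumerate(x):
                w[i + 1] += lr * err * xi
    return w

# The target behaviour (logical OR) is given as data, not as code.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train(data, [0.0, 0.0, 0.0])   # initial weights supplied by the developer
print([predict(w, x) for x, _ in data])  # learned function matches the data
```

Nobody wrote an `if`/`else` encoding OR; the logic lives in the trained weights, which is exactly what gets deployed in the Software 2.0 model.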

With Software 2.0, we envision an agile, more collaborative developer environment, leading to more efficient and effective solutions. 

Rekha Singhal is a senior research scientist with TCS Research and Innovation. She has worked in both corporate and academic research. Her research interests are in the areas of high-performance data analytics systems, heterogeneous architecture, performance modelling of big data systems, query optimization, and storage area networks. Rekha headed the development of the disaster recovery product Revival 1000, which was launched at the Centre for Development of Advanced Computing. She received her MTech and PhD (computer science) from IIT Delhi, India.