Autonomic computing enables networks to manage themselves with little or no human intervention. IBM proposes a foundation of industry standards to create a multilevel autonomic computing system that performs critical administrative tasks on its own. Such a system must be able to take inventory of itself, configure itself, optimize its performance, self-heal, monitor its security, recognize and adapt to coexisting systems, work with shared technologies, and achieve these goals smoothly. The end-user goals are flexibility, affordability, and transparency. IBM’s plan for autonomic computing is more far-reaching than those of other companies and will be implemented in stages over several years.
Autonomic computing is the next generation of computing technology, one that will enable networks to manage themselves with little or no human intervention. It takes its name from the human autonomic nervous system, which sends out impulses that control heart rate, breathing, and other vital functions without conscious thought or effort.
Paul Horn of IBM Research first proposed the idea of autonomic computing on October 15, 2001, at the Agenda conference in Arizona. The need centers on the exponential growth in networking complexity. Not only is there a vast array of desktop and mobile devices interconnecting and feeding into various types of networks using competing strategies, standards, and interfaces, but businesses, institutions, and even infrastructure rely more and more on these networks. At the same time, there is a shortage of IT professionals, and it is virtually impossible for technicians to keep up with the constant onslaught of new devices, changing protocols, new online business solutions, and mobile interfacing challenges. IBM and other technology giants expect the problem to worsen.
The solution, according to IBM, is to create a foundation of industry standards based on common data-management protocols. This “shared root hypothesis” would allow hardware and software from different manufacturers not only to work together but also to support a multilevel autonomic computing system, creating an environment in which the system can perform critical administrative tasks without human intervention.
IBM identifies eight basic criteria that define a pervasive autonomic computing system. In short, they are as follows:
The system must be able to take a continuous inventory of itself, including its connections, devices, and resources, and know which of them to share and which to protect.
It must be able to dynamically configure and reconfigure itself as needed.
It must constantly look for ways to optimize performance.
It must self-heal by reallocating resources and reconfiguring itself to work around any failed or malfunctioning elements.
It must be able to monitor security and protect itself from attacks.
It must be able to recognize and adapt to the needs of coexisting systems within its environment.
It must work with shared technologies; proprietary solutions are incompatible with the philosophy of autonomic computing.
It must achieve all of these goals smoothly, without user intervention.
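To make the criteria above more concrete, here is a minimal sketch in Python of what an autonomic control cycle might look like: take inventory (self-monitoring), route work away from failed components (self-healing), and rebalance load (self-optimization). All class and method names here are hypothetical illustrations for this article, not part of any IBM specification.

```python
# Hypothetical sketch of an autonomic control cycle: monitor -> heal -> optimize.
# Names (Component, AutonomicManager, run_cycle) are illustrative, not an IBM API.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    healthy: bool = True
    load: float = 0.0  # fraction of capacity currently in use

@dataclass
class AutonomicManager:
    components: list
    log: list = field(default_factory=list)

    def monitor(self):
        """Self-inventory: take continuous stock of managed components."""
        return {c.name: (c.healthy, c.load) for c in self.components}

    def heal(self):
        """Self-healing: shift work from failed components to healthy ones."""
        failed = [c for c in self.components if not c.healthy]
        healthy = [c for c in self.components if c.healthy]
        for bad in failed:
            if healthy:
                target = min(healthy, key=lambda c: c.load)  # least-loaded spare
                target.load = min(1.0, target.load + bad.load)
                bad.load = 0.0
                self.log.append(f"moved load from {bad.name} to {target.name}")

    def optimize(self):
        """Self-optimization: spread load evenly across healthy components."""
        healthy = [c for c in self.components if c.healthy]
        if healthy:
            average = sum(c.load for c in healthy) / len(healthy)
            for c in healthy:
                c.load = average

    def run_cycle(self):
        """One autonomic cycle: inventory first, then repair, then tune."""
        snapshot = self.monitor()
        self.heal()
        self.optimize()
        return snapshot
```

A real autonomic system would, of course, also cover the security, adaptation, and open-standards criteria; the sketch only illustrates how the inventory, healing, and optimization steps can run as one unattended loop.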
While these eight ingredients define an autonomic computing system, IBM hopes they translate into three end-user goals: flexibility, affordability, and transparency. In short, users should be able to retrieve their data seamlessly from home, the office, or the field, regardless of device, network, or connection method.
Several universities and companies, including Sun Microsystems and Hewlett-Packard, are developing similar systems, but IBM says its plans for autonomic computing are more far-reaching. Because the plan depends on a cooperative evolution of hardware and software, autonomic computing will be implemented in stages over several years.