Artificial Intelligence in a Strategic View

As a combination of mathematical logic and computing, Artificial Intelligence (AI) is one of the newest fields of science and engineering. Since its origins in the 1950s, it has attracted great academic interest. It is not only a technology with a wide range of innovative applications, but also a driver of change for organizations and entire industries. Its contributions are diverse: it enables automated, highly interconnected machines and robots as well as largely autonomous production processes, significantly reducing time-to-market and production costs. In addition, its ability to analyze large, unstructured data sets allows AI-powered companies to understand their customers better and in greater detail, which may, for instance, lead to increased sales.

Fundamentals of AI

There are different understandings of AI and, accordingly, various definitions of the term, though none is generally accepted. Most definitions compare AI to human, or more generally rational, behavior: AI engineers and researchers endeavor to imitate and ultimately build animal- and human-like intelligence. Consequently, one of the first AI concepts was, unsurprisingly, not a formal definition but a test to determine whether an artificial entity possesses intelligence. In this Turing Test, a computer passes if, by means of its written responses, it can convince a human interrogator that it is human.

The lack of a generally accepted definition is compensated for by a useful distinction between weak and strong AI. Artifacts of weak AI aim to perform tasks in a very limited and predetermined field, such as visual perception, speech or visual pattern recognition, and probabilistic reasoning. Well-known examples are IBM's Deep Blue, the chess program that beat the reigning world chess champion in 1997, as well as virtual assistants like Siri, Cortana, and OK Google, which are installed on many modern smartphones. Strong AI, which goes far beyond these examples' limitations, would be capable of performing any cognitive task at a human or superhuman level.

A Short History of AI

Research on AI began in the 1950s and gained speed in the early 1960s. Although there were significant technological breakthroughs, the history of AI research was mostly characterized by setbacks. In at least two phases, known as the “AI winters,” research efforts almost came to a full halt and ambitions faltered. Expectations of the field were very high from the beginning. After about a decade of research, however, it became clear that the expectations of general-purpose AI applications would not and could not be fulfilled, due in particular to the relatively low computational power available at the time. This resulted in the first AI winter, which lasted from the early 1970s to the early 1980s and during which funding for and interest in the field almost dried up completely.

The fundamental development at the beginning of the 1970s was a paradigm shift from general to domain-specific techniques. DENDRAL, a program with which chemists could identify and analyze unknown organic molecules, proved that, despite the low computational power and the initially unfulfilled expectations, narrow AI in particular could make significant practical contributions. These improvements rekindled interest in AI, which lasted for about a decade before AI research once again faced significant budget cuts, again owing to overly optimistic and unfulfilled expectations. This second AI winter was, however, neither as severe nor as long as the first. At the beginning of the 1990s, computational power increased significantly, and Moore’s Law seemed to aid AI research considerably. Together with other developments, such as the Internet and the collection of data on a massive scale, this paved the way for IBM’s Watson to ignite broad interest in what modern AI could achieve when it beat the world’s best human Jeopardy! players in 2011.

AI Applications and New Business Models

AI can be divided into a multitude of subfields, of which Machine Learning is one of the most prominent. Machine Learning is concerned with programming computers to optimize a performance criterion using example data. This can be achieved in different ways and with various outcomes, such as the classification of objects, pattern recognition, outlier detection, and the prediction of future events. Online service providers such as Amazon, Google, Facebook, and Netflix were among the first to use machine learning applications, offering their users personalized playlists and searches or producing better product, service, and contact recommendations.
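To make the idea of learning from example data concrete, the following is a minimal, purely illustrative sketch of a supervised classification task in Python using the scikit-learn library. The feature names, the data, and the framing of predicting whether a customer responds to a recommendation are hypothetical and not taken from the project described here.

    # A minimal, illustrative sketch of supervised machine learning:
    # a classifier is fitted to example data and then predicts whether a
    # (hypothetical) customer will respond to a product recommendation.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical example data: each row is [pages_viewed, past_purchases];
    # each label is 1 (responded to a recommendation) or 0 (did not).
    X_train = [[12, 3], [2, 0], [8, 5], [1, 1], [15, 7], [3, 0]]
    y_train = [1, 0, 1, 0, 1, 0]

    # "Learning" here means optimizing the model's parameters on the examples.
    model = LogisticRegression()
    model.fit(X_train, y_train)

    # The fitted model can then classify a new, previously unseen customer.
    print(model.predict([[10, 4]]))   # e.g. [1] -> likely to respond

The same pattern of fitting a model to labeled examples and applying it to new cases underlies pattern recognition, outlier detection, and the recommendation systems mentioned above, only with different models and far larger data sets.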

Traditionally, AI performs best in highly redundant, low-complexity tasks. Recent research allows AI to learn, automate, and support more complex cognitive tasks. By taking over monotonous functions, or assisting humans with more advanced problems, AI can open up new opportunities and free humans to focus on tasks that require human interaction, creativity, and problem-solving skills. Consequently, the shift of responsibility from a human to a machine requires changes in the organization of work, in human-machine operations and interactions, and in the process landscape. Furthermore, the opportunities that AI presents can enable entirely new business models and create new markets.

At the Chair of Information Systems and Strategic IT Management, our aim is to investigate the impact of AI on all strategic aspects of organizations and to provide guidance on the change processes associated with developing, adopting, adapting, and using AI technology. Peder Bergan, for instance, researches how organizations can best use AI to increase organizational performance and how they can gain the capabilities necessary to take advantage of AI’s significant potential.

Duration

2016–present

Publications

-

Student contributions

Helge Schmermbeck, B.Sc.