December 2023
Happenings at OpenAI reflect a wider divide in Silicon Valley
15:53

Buroshiva Dasgupta

A leaked document from OpenAI suggests that the reason for Sam Altman's dismissal was that he had reached a stage in the development of artificial intelligence that could threaten humanity.

Experts in Artificial Intelligence (AI) envisage AI as evolving through ten stages, stretching from narrow, rule-based or context-based domains (such as alarms, Siri/Alexa, or translation tools) to cosmic or God-like AI, where the machine becomes all-knowing and all-powerful. ChatGPT, whose arrival created such a sensation, represents the fourth stage of this evolution, and only up to this point does AI remain in the realm of reality. The remaining stages are still considered "theoretical".

Observers say that Sam Altman, who was pushed out of OpenAI and then re-inducted by a new board in the recent corporate drama in the IT world, had reached the fifth stage of AI evolution, called super-intelligence. It is sometimes termed "Qualia" or Artificial General Intelligence (AGI): a form of meta-cognition in which the computer supersedes human intelligence and engages in cross-domain learning.

Among the threats posed by this super-intelligent AGI "Q star" computer, the first is the breaking of AES-192, an encryption standard used by governments, banks and other organisations to protect sensitive information. Such capabilities could lead to massive privacy violations and security breaches. Countries or organisations with access to such a tool could unfairly dominate or surveil others and create an imbalance of power.

The return of Altman also suggests several changes in the IT industry. Microsoft, which stood by Altman and all but forced a change in the OpenAI board of directors to induct him back into the company, will, for some time at least, dominate new discoveries in the AI sphere, OpenAI being the lead organisation in this field. In AI companies around the world, the focus will shift from academic idealism to commercialism. OpenAI, which was originally structured as a for-profit organisation within a non-profit entity, will radically change its focus and serve the industry (read Microsoft). Ilya Sutskever, OpenAI's chief scientist and a fellow board member, holds a PhD from the University of Toronto; Sam Altman is a Stanford drop-out, and Greg Brockman, who resigned in support of Altman and is now back in the company, dropped out of Harvard and then MIT. It is said that Sam and Ilya had a personality clash, now described as a clash between academia and industry.

Steve Jobs, a Reed College drop-out, was likewise initially pushed out of Apple, later re-inducted, and subsequently made history. The tussle between academia and industry is universal. The events at OpenAI are also a dramatic manifestation of a wider divide in Silicon Valley. On one side are the "doomers", who believe that, left unchecked, AI poses an existential risk to humanity and hence advocate stricter regulation. Opposing them are the "boomers", who play down fears of a darker AI revolution and stress its potential to accelerate progress.

The split partly reflects a philosophical difference. Many in the doomer camp are influenced by "effective altruism", a movement worried that AI might wipe out humanity. Boomers espouse a counter-worldview called "effective accelerationism", which holds that the development of AI should be sped up.
