May 19, 2022

Artificial intelligence is mastering language. Should we trust what it says?


But even as the fluency of GPT-3 has dazzled many observers, the large-language-model approach has also attracted significant criticism over the past few years. Some skeptics argue that the software is capable only of blind mimicry: it imitates the syntactic patterns of human language but cannot generate its own ideas or make complex decisions, a fundamental limitation that will keep the LLM approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in a long history of AI hype, channeling research money and attention into what will ultimately prove a dead end and keeping other promising approaches from maturing. Other critics believe that software like GPT-3 will forever be compromised by the biases, propaganda, and misinformation in the data it has been trained on, meaning that using it for anything more than parlor tricks will always be irresponsible.

Wherever you land in this debate, the pace of recent improvement in large language models makes it hard to imagine that they won't be deployed commercially in the coming years. That raises the question of how they should be unleashed on the world, and, for that matter, how the other massive advances in artificial intelligence should be. In the rise of Facebook and Google, we have seen how dominance in a new realm of technology can quickly lead to astonishing power over society, and AI threatens to be even more transformative than social media in its ultimate effects. What is the right kind of organization to build and own something of such scale and ambition, with so much promise and so much potential for abuse?


Or should we build it at all?

OpenAI's origins date back to July 2015, when a small group of tech-world stars gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place amid two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computing power, along with some new breakthroughs in the design of neural networks, had created a palpable sense of excitement in the field of machine learning. There was a sense that the long "AI winter," the decades in which the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a far higher level of accuracy than any neural network had previously achieved. Google quickly swooped in to hire AlexNet's creators, while also acquiring DeepMind and launching an initiative of its own called Google Brain. The mainstream adoption of smart assistants like Siri and Alexa showed that even scripted agents could be breakout consumer successes.

But over that same stretch of time, a seismic shift in public attitudes toward Big Tech was under way, with once-popular companies like Google and Facebook coming under criticism for their near-monopoly power, their amplification of conspiracy theories, and their relentless funneling of our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were appearing on op-ed pages and on the TED stage. Nick Bostrom of Oxford University published his book Superintelligence, laying out a range of scenarios in which advanced AI might deviate from humanity's interests, with potentially disastrous consequences. In late 2014, Stephen Hawking told the BBC that "the development of full artificial intelligence could spell the end of the human race." It seemed as if the cycle of corporate consolidation that characterized the social-media age was already under way with AI, only this time the algorithms might not merely sow polarization or sell our attention to the highest bidder; they might end up destroying humanity itself. And once again, all the evidence suggested that this power was going to be controlled by a few Silicon Valley megacorporations.


The agenda for that July night's dinner on Sand Hill Road was nothing if not ambitious: figuring out the best way to steer AI research toward the most positive outcome possible, avoiding both the short-term negative consequences that had marred the Web 2.0 era and the long-term existential threats. From that dinner a new idea began to take shape, one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technical as organizational: if AI was going to be unleashed on the world in a safe and beneficial way, it was going to require innovation on the level of governance, incentives, and stakeholder participation. The technical path to what the field calls artificial general intelligence, or AGI, was not yet clear to the group. But the troubling forecasts from Bostrom and Hawking convinced them that the achievement of humanlike intelligence by AI would consolidate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control it.

In December 2015, the group announced the formation of a new entity called OpenAI. Altman signed on to be the company's CEO, with Brockman overseeing the technology; another dinner attendee, the AlexNet co-creator Ilya Sutskever, was recruited from Google to be head of research. (Elon Musk, who was also present at the dinner, joined the board but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: "OpenAI is a non-profit artificial intelligence research company," they wrote. "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." They added: "We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."


OpenAI's founders would release a public charter three years later, spelling out the core principles behind the new organization. The document was easily read as a not-so-subtle dig at Google's "don't be evil" slogan from its early days, an acknowledgment that maximizing the social benefits of a new technology, and minimizing its harms, was not always a simple calculation. While Google and Facebook had both reached global dominance through closed-source algorithms and proprietary networks, OpenAI's founders promised to go in the other direction, sharing new research and code freely with the world.