18-02-2020 6:06 am Published by Nederland.ai

Every year, OpenAI's employees vote on when they believe that artificial general intelligence, or AGI, will finally arrive. It is mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.

In the four short years of its existence, OpenAI has become one of the world's leading AI research laboratories. It has made a name for itself by consistently producing headline-grabbing research, alongside other AI heavyweights such as Alphabet's DeepMind. It is also a darling of Silicon Valley, counting Elon Musk and the legendary investor Sam Altman among its founders.

Above all, it is lionized for its mission. Its goal is to be the first to create AGI, a machine with the learning and reasoning power of a human mind. The purpose is not world domination, but to ensure that the technology is developed safely and that its benefits are distributed evenly across the world.

The implication is that AGI could easily run amok if the technology's development is left to the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and deploying them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.

OpenAI wants to be that shepherd, and it has carefully crafted its image to fit. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to “build value for everyone rather than for shareholders.” The charter – a document so sacred that employees' pay is tied to how well they adhere to it – declares that OpenAI's “primary fiduciary duty is to humanity.” Reaching AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing and collaborate instead. This seductive narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.

But three days at the OpenAI office – and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field – paint a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, fierce competition and mounting pressure for ever more funding have eroded the company's founding ideals of transparency, openness, and collaboration. Many who work or have worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.

Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the famous English mathematician and computer scientist, posed the now famous provocation “Can machines think?” Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.

“It's one of the most fundamental questions in intellectual history, right?” says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a nonprofit AI research laboratory in Seattle. “It's just like, do we understand the origin of the universe? Do we understand matter?”

The problem is, AGI has always remained vague. Nobody can really describe what it might look like or the minimum of what it should do. It is not obvious, for example, that there is only one kind of general intelligence; human intelligence could be just a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized vision, a machine intelligence unhampered by the need for sleep or the inefficiency of human communication could help solve complex challenges such as climate change, poverty, and hunger.

But the resounding consensus within the field is that such advanced capabilities are decades, if not centuries, away, if it is possible to develop them at all. Many also fear that pursuing this goal too aggressively could backfire. In the 1970s, and again in the late 1980s and early 1990s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. “The field felt like a backwater,” said Peter Eckersley, the research director of the industry group Partnership on AI, of which OpenAI is a member.

Against this backdrop, OpenAI entered the world with a splash on December 11, 2015. It was not the first to openly declare that it was pursuing AGI; DeepMind had done so five years earlier and had been acquired by Google in 2014. But OpenAI seemed different. For one, the sticker price was shocking: the venture would start with $1 billion from private investors, including Musk, Altman, and PayPal co-founder Peter Thiel.

The star-studded investor list stirred up a media frenzy, as did the impressive list of initial employees: Greg Brockman, who had led technology at the payments company Stripe, would be chief technology officer; Ilya Sutskever, who had studied under AI pioneer Geoffrey Hinton, would be research director; and seven researchers, freshly graduated from top universities or plucked from other companies, would form the core technical team. (Last February, Musk announced that he was parting ways with the company over disagreements about its direction. A month later, Altman stepped down as president of the startup accelerator Y Combinator to become OpenAI's CEO.)

But more than anything, OpenAI's nonprofit status made a statement. “It will be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest,” the announcement said. “Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.” Though it never made the criticism explicit, the implication was clear: other labs, such as DeepMind, could not serve humanity because they were constrained by commercial interests. Where they were closed, OpenAI would be open.

In a research landscape that had become increasingly privatized and focused on short-term financial gain, OpenAI offered a new way to fund progress on the biggest problems. “It was a beacon of hope,” said Chip Huyen, a machine learning expert who has closely followed the lab's journey.

At the intersection of 18th and Folsom Streets in San Francisco, the OpenAI office looks like a mysterious warehouse. The historic building has gray paneling and tinted windows, with most shades pulled down. The letters “PIONEER BUILDING” – the remains of its former owner, the Pioneer Truck Factory – wrap around the corner in faded red paint.

Inside, the space is light and airy. The first floor has a few common areas and two meeting rooms. One, a healthy size for larger meetings, is called A Space Odyssey; the other, more of a glorified telephone booth, is called Infinite Jest. This is the space I am restricted to during my visit. I am forbidden to visit the second and third floors, which house everyone's desks, several robots, and just about everything interesting. When it is time for their interviews, people come down to me. An employee keeps a watchful eye on me between meetings.

On the beautiful blue-sky day that I arrive to meet Brockman, he looks nervous and guarded. “We've never given anyone this much access,” he says with a cautious smile. He wears casual clothes and, like many at OpenAI, sports a shapeless haircut that seems to signal an efficient, no-frills mentality.

Brockman, 31, grew up on a farm in North Dakota and had what he describes as a “concentrated, peaceful childhood.” He milked cows, collected eggs, and fell in love with math while studying on his own. He went to Harvard in 2008 intending to double-major in mathematics and computer science, but he soon grew restless to enter the real world. He dropped out a year later, entered MIT instead, and dropped out again within a few months. The second time, his decision was final. Once he moved to San Francisco, he never looked back.

Brockman takes me to lunch to get me out of the office during an all-company meeting. At the café across the street, he speaks about OpenAI with intensity, sincerity, and wonder, often drawing parallels between its mission and the landmark achievements in the history of science. It is easy to appreciate his charisma as a leader. Between memorable passages from the books he has read, he zeroes in on the Valley's favorite narrative, America's race to the moon. (“A story I really love is the caretaker's story,” he says, referring to a famous but probably apocryphal tale. “Kennedy walks up to him and asks him, ‘What are you doing?’ and he says, ‘Oh, I am helping to put a man on the moon!’”) There is also the transcontinental railroad (“It was actually the last mega-project carried out entirely by hand … a project of immense scale that was completely risky”) and Thomas Edison's lightbulb (“A committee of leading experts said, ‘It will never work,’ and a year later he shipped”).

Brockman is aware of the gamble OpenAI has taken, and aware that it invites cynicism and scrutiny. But with every reference, his message is clear: people can be as skeptical as they want. It is the price of daring.

Those who joined OpenAI in its early days remember the energy, the excitement, and the sense of purpose. The team was small, formed through a tight web of connections, and management stayed loose and informal. Everyone believed in a flat structure where ideas and debate were welcome from anyone.

Musk played a significant role in building a collective mythology. “Look, I get it. AGI may be far away, but what if it isn't?” recalls Pieter Abbeel, a UC Berkeley professor who worked there with some of his students for the first two years. “What if there is even a 1% or 0.1% chance that it happens in the next five to ten years? Shouldn't we think about it very carefully? That resonated with me,” he says.

But the informality also led to some vagueness of direction. In May 2016, Altman and Brockman received a visit from Dario Amodei, then a Google researcher, who told them that nobody understood what they were doing. In an account published in the New Yorker, it was not clear whether the team itself knew either. “Our goal right now is to do the best thing there is to do,” Brockman said. “It's a little vague.”

Amodei nevertheless joined the team a few months later. His sister, Daniela Amodei, had worked with Brockman before, and he already knew many of OpenAI's members. Two years later, at Brockman's request, Daniela joined as well. “Imagine: we started with nothing,” says Brockman. “We just had the ideal that we wanted AGI to go well.”
