The Verge interviews Google DeepMind CEO Demis Hassabis about Google’s AI strategy after merging DeepMind and Google Brain. Hassabis discusses the reasons for the merger, the company’s goals for AGI, and the competitive landscape with open-source models. He addresses concerns about AI risk, regulation, and the changing nature of labor in the AI industry. Hassabis envisions AI becoming a universal personal assistant and emphasizes the importance of balancing innovation with responsible development.
What are the key factors that led to the merger of Google Brain and DeepMind into Google DeepMind?
The merger of Google Brain and DeepMind into Google DeepMind was driven by the recognition that AI has entered a new era, transitioning from pure research to practical application in everyday life and to solving real-world problems. The public’s enthusiastic response to large language models like ChatGPT highlighted the potential for AI to be useful and accessible to everyone. This shift required Google to streamline its AI efforts around product development and improved user experiences. Combining the resources and expertise of Google Brain and DeepMind was seen as the best way to accelerate AI innovation and deliver next-generation AI-powered products, while enabling a more coordinated approach to the research, development, and deployment of AI across Google’s products and services.
What is Demis Hassabis’s vision for the future of AI and Google DeepMind’s role in it?
Demis Hassabis envisions a future where AI systems, particularly through Google DeepMind, evolve into incredible universal personal assistants that people use multiple times daily for various helpful tasks. These AI assistants will assist in everything from recommending books and live events to booking travel, planning trips, and aiding in everyday work. Hassabis believes that current chatbots are just the beginning and that future AI systems will incorporate planning, reasoning, and memory capabilities, making them significantly more useful and capable. Google DeepMind aims to lead this evolution by focusing on building AI-powered next-generation products that improve people’s lives and solve complex real-world problems. This includes advancing not only generative AI but also planning, deep reinforcement learning, problem-solving, and reasoning capabilities.
What are the potential risks associated with AI, and how is Google DeepMind addressing them?
Demis Hassabis acknowledges several potential risks associated with AI, including the spread of disinformation, the potential for misuse by bad actors, and the long-term risks associated with advanced AI systems approaching artificial general intelligence (AGI). Google DeepMind is addressing these risks through a combination of responsible development practices, research into safety and evaluation benchmarks, and engagement with stakeholders across society. The company is focused on improving the factuality and reliability of AI systems, developing methods for detecting and preventing the spread of disinformation, and promoting international cooperation on AI governance and regulation. Hassabis also emphasizes the importance of encrypted watermarking to identify AI-generated content and prevent deepfakes. Google DeepMind aims to balance the potential benefits of AI with the need to mitigate risks, ensuring that AI is developed and deployed in a way that benefits humanity.
Original article: https://www.theverge.com/23778745/demis-hassabis-google-deepmind-ai-alphafold-risks
Advice:
- Focus on developing AI systems with planning, reasoning, and memory capabilities to move beyond current chatbot limitations.
- Prioritize research into robust evaluation benchmarks to rigorously test AI capabilities and ensure safety and reliability.
- Implement encrypted watermarking for AI-generated content to combat deepfakes and disinformation.