Google’s Gemini is a family of large-scale, multimodal, general-purpose AI models that can perform tasks across many types of information, including text, code, audio, images, and video. Gemma, a smaller and more accessible family of open models available to developers and researchers, is built from the same research and technology used to create Gemini.
Google claims that Gemini and Gemma are designed with its AI Principles in mind and have undergone rigorous safety evaluations to ensure fairness and responsible use. However, some critics have raised concerns about the risks and challenges of deploying such powerful, general-purpose AI models, particularly around data quality, privacy, security, accountability, and a focus on social impact that runs contrary to accuracy.
One of the main problems that Google faced with Gemini was a seemingly self-imposed data poisoning attack that compromised the integrity and accuracy of its models. Data poisoning is a form of adversarial attack that tries to manipulate the training data of a machine learning model in order to influence its prediction behavior or cause it to produce faulty outputs.
Raghavan said Google didn’t intend for Gemini to refuse to create images of any particular group or to generate historically inaccurate photos. He also reiterated Google’s promise to improve Gemini’s image-generation abilities. - The Morning After: Why Google's Gemini image generation feature overcorrected for diversity
What many don’t understand is that while the technology behind generative AI is groundbreaking in how it produces unique responses, those responses are only as good as the data used to train and tune the model. In the case of Gemini, someone within the development process would have essentially injected skewed or malicious data into its online learning process, which caused it to generate harmful or misleading content.
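For readers unfamiliar with the mechanics, here is a minimal, hedged sketch of how a label-flipping data poisoning attack works on a toy text classifier. It illustrates the general technique only; the dataset, the trigger word, and the scikit-learn model are hypothetical assumptions and bear no relation to Gemini’s actual training pipeline.

```python
# Minimal sketch of a targeted data poisoning (label-flipping) attack on a toy
# text classifier. All data, labels, and parameters here are hypothetical;
# this illustrates the technique, not Google's actual pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "clean" training corpus with sentiment labels (1 = positive, 0 = negative).
texts = [
    "great product", "love this", "excellent service", "works perfectly",
    "terrible experience", "hate it", "awful quality", "completely broken",
] * 25  # repeated so the model has enough examples to learn from
labels = [1, 1, 1, 1, 0, 0, 0, 0] * 25

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

# Baseline model trained on clean labels.
clean_model = LogisticRegression().fit(X, labels)

# Poisoning step: an insider (or a compromised feedback loop) flips the label
# on every example containing the word "broken" before the next training run.
poisoned_labels = [1 if "broken" in t else y for t, y in zip(texts, labels)]
poisoned_model = LogisticRegression().fit(X, poisoned_labels)

# The poisoned model now confidently misclassifies the targeted phrase while
# behaving normally on everything else.
probe = vectorizer.transform(["excellent service", "completely broken"])
print("clean   :", clean_model.predict(probe))    # expected: [1 0]
print("poisoned:", poisoned_model.predict(probe)) # expected: [1 1]
```

Even a small, targeted change to the training labels is enough to make the poisoned model give the "wrong" answer for the targeted input while behaving normally everywhere else, which is part of why poisoning is so hard to spot after the fact.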
Even Gemini’s own response doesn’t admit to a problem, only that the feature is offline for improvements. Nor does it apologize for any offending images generated before the feature was taken offline.
This data poisoning attack exposed Gemini’s vulnerability and its lack of robustness and resilience against adversarial manipulation. It also raised questions about Google’s Responsible AI policy and whether it was sufficient to prevent or mitigate such incidents, or whether Google was even applying its own RAI policies.
All of which raises the question of how Google can resolve an issue like this when it seems activist employees went against Responsible AI policies without a mass firing - unless Google’s RAI policies are just window dressing.
Some experts argued that Google should have implemented more safeguards and checks to ensure the quality and validity of the data used to train and update Gemini, as well as to monitor and audit its outputs and impacts. Others suggested that Google should have been more transparent and accountable about the development and deployment of Gemini, and that it should have engaged more with the wider AI community and society to address the ethical and social implications of its models.
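To make that concrete, below is a hedged sketch of the kind of data-quality gate those experts describe: new training examples are validated and their provenance logged before they reach a training run. The `Example` class, label set, and rejection rules are purely illustrative assumptions, not anything Google has documented.

```python
# Hedged sketch of a data-quality gate: new training examples are validated
# and logged before they ever reach a training run. The rules and thresholds
# below are illustrative assumptions only.
from dataclasses import dataclass
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("training-data-audit")

ALLOWED_LABELS = {"positive", "negative", "neutral"}  # hypothetical label set

@dataclass
class Example:
    text: str
    label: str
    source: str  # where the example came from, kept for later auditing

def validate(example: Example) -> bool:
    """Reject examples that fail basic quality and provenance checks."""
    if example.label not in ALLOWED_LABELS:
        log.warning("rejected %r: unknown label %r", example.source, example.label)
        return False
    if not example.text.strip() or len(example.text) > 10_000:
        log.warning("rejected %r: empty or oversized text", example.source)
        return False
    if example.source.startswith("unverified:"):
        log.warning("rejected %r: untrusted source", example.source)
        return False
    return True

incoming = [
    Example("great product", "positive", "feedback-form"),
    Example("anything", "sarcastic", "feedback-form"),        # bad label
    Example("looks fine", "positive", "unverified:scraper"),  # untrusted source
]

accepted = [ex for ex in incoming if validate(ex)]
log.info("accepted %d of %d examples", len(accepted), len(incoming))
```

A gate like this won’t stop a determined insider on its own, but it creates an audit trail that makes a poisoning attempt detectable and attributable after the fact.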
But even today, while Google has addressed some concerns, including taking image creation offline while it architects a fix, the text-based Gemini service continues to lecture in its responses instead of simply answering questions accurately and factually.
Responsible AI is a work in progress, and there’s plenty of room for each chatbot developer to define for itself what a responsible implementation looks like. Over time, many have become desensitized to Google’s power to distribute ads based on internet searches and cell phone conversations. Google may decide to just wait it out with Gemini, hoping the distrust in the chatbot will blow over so the company can continue to distribute its own social and political leanings.
The bigger issue is that Google is part of the consortium helping develop the U.S. government’s RAI policies.
Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety
Learn about Data Poisoning
Learn about Data Poisoning and many other AI-specific attacks.
[Want to discuss this further? Hit me up on Twitter or LinkedIn]
[Subscribe to the RSS feed for this blog]
[Subscribe to the Weekly Microsoft Sentinel Newsletter]
[Subscribe to the Weekly Microsoft Defender Newsletter]
[Subscribe to the Weekly Azure OpenAI Newsletter]
[Learn KQL with the Must Learn KQL series and book]
[Learn AI Security with the Must Learn AI Security series and book]