New Bill Aims to Unmask AI’s Use of Copyrighted Creativity
On Tuesday, US legislation aimed at regulating AI companies’ use of copyrighted content was introduced, signaling a pivotal moment for the intersection of artificial intelligence and copyright law. Spearheaded by California Congressman Adam Schiff, the bill would require AI firms to disclose the copyrighted materials in their training datasets to the Copyright Office before launching new AI systems. With a 30-day notice requirement before AI tool releases and potential financial penalties for non-compliance, the bill targets the vast datasets used to train generative AI technologies: billions of lines of text and images, and millions of hours of music and movies.
Amidst growing concerns over whether AI corporations are leveraging copyrighted materials without authorization, Schiff’s initiative seeks to establish a framework for ethical AI development, emphasizing the balance between AI’s transformative potential and the necessity of protecting creator rights. The bill has garnered support from various entertainment entities and unions, including the Recording Industry Association of America and SAG-AFTRA, highlighting the widespread industry call for safeguarding human creative outputs against unauthorized AI replication.
OpenAI, a front-runner in AI innovation, currently faces multiple copyright infringement lawsuits, challenging the legal boundaries of “fair use” in the context of AI’s reliance on copyrighted works for training. This unfolding legal battle underscores the critical debate over copyright law’s applicability to AI training practices and the potential implications for both creators’ livelihoods and the viability of AI technologies.
As generative AI advances, the entertainment sector’s pushback is intensifying, with over 200 prominent musicians recently advocating for stronger protections against AI’s encroachment on artistic integrity. This legislative move represents a significant step towards addressing the complex issues at the heart of AI’s rapid evolution and its impact on creative industries. SOURCE
Gemini 1.5 Pro: Global Launch with Audio Magic & Smart Features!
Just under two months ago, we introduced the innovative Gemini 1.5 Pro model on Google AI Studio, and the response from developers has been incredible. They have used its expansive 1 million token context window for debugging, creating, and learning in ways that have truly impressed us.
Now, we’re excited to announce that Gemini 1.5 Pro is rolling out globally in over 180 countries through the Gemini API in a public preview. This iteration boasts pioneering native audio understanding and introduces an easy-to-use File API for efficient file management. Additionally, we’ve integrated new functionalities like system instructions and JSON mode, offering developers enhanced control over outputs. Plus, our latest text embedding model sets a new standard in performance. Head over to Google AI Studio, secure your API key, and dive into the world of possibilities with Gemini 1.5 Pro.
Unlock new use cases with audio and video modalities
Gemini 1.5 Pro now supports audio (speech) understanding, available through both the Gemini API and Google AI Studio. The model can also reason across image (frames) and audio (speech) for video content within Google AI Studio. API support for these video capabilities is on the horizon, promising even more versatility for Gemini 1.5 Pro users.
Gemini API Improvements
Today, we’re addressing a number of top developer requests:
Introducing system instructions: Now in Google AI Studio and the Gemini API, you can direct the model’s responses using system instructions. Tailor the model to your needs by specifying roles, formats, objectives, and guidelines, ensuring it behaves in alignment with your unique requirements.
JSON mode feature: Instruct the model to produce outputs exclusively as JSON objects, facilitating the extraction of structured data from text or visuals. You can begin using this feature today, with support in the Python SDK to follow.
Enhanced function calling options: Improve the model’s reliability by restricting outputs to specific modes, whether text, a function call, or just the function itself, giving you finer control over the model’s responses.
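Taken together, these three features correspond to fields in a generateContent request body. The sketch below builds such a body in Python; the field names (system_instruction, response_mime_type, function_calling_config) are assumptions drawn from the public Gemini REST API and should be verified against the current API reference before use.

```python
import json

# Minimal sketch of a Gemini API generateContent request body combining
# the three features above. Field names are assumptions; check the
# current Gemini API reference before relying on them.
def build_request(user_text: str, force_function_call: bool = False) -> dict:
    return {
        # System instructions: steer role, format, and guidelines.
        "system_instruction": {
            "parts": [{"text": "You are a concise assistant. Answer in one sentence."}]
        },
        "contents": [{"role": "user", "parts": [{"text": user_text}]}],
        # JSON mode: constrain the model's output to a JSON object.
        "generation_config": {"response_mime_type": "application/json"},
        # Function calling mode: "ANY" forces a function call,
        # "AUTO" lets the model decide, "NONE" disables calls.
        "tool_config": {
            "function_calling_config": {
                "mode": "ANY" if force_function_call else "AUTO"
            }
        },
    }

body = build_request("Summarize this week's AI news.")
print(json.dumps(body, indent=2))
```

In practice this dictionary would be POSTed to the generateContent endpoint with your API key; the payload shape is the point here, not the transport.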
A new embedding model with improved performance
Starting today, the Gemini API is offering developers access to our advanced text embedding model, text-embedding-004 (known as text-embedding-preview-0409 in Vertex AI). This model sets a new standard in retrieval performance, surpassing other models of similar dimensions in the MTEB benchmarks.
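As a rough illustration of the retrieval use case, an application would embed documents and queries with this model and rank documents by cosine similarity. The snippet below sketches the request shape and the similarity step; the endpoint path and payload fields are assumptions modeled on the Gemini API’s embedContent method, so confirm them against the official documentation.

```python
import math

# Hypothetical request shape for the embedContent endpoint; verify the
# path and payload fields against the current Gemini API reference.
BASE_URL = "https://generativelanguage.googleapis.com/v1beta"

def embed_request(text: str, model: str = "text-embedding-004"):
    url = f"{BASE_URL}/models/{model}:embedContent"
    payload = {"content": {"parts": [{"text": text}]}}
    return url, payload

def cosine_similarity(a, b):
    # Rank retrieved documents by the angle between embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

url, payload = embed_request("How do embeddings improve retrieval?")
print(url)
```

The actual embedding vectors come back from the API; cosine similarity over those vectors is then a purely local computation.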
This update is just the beginning of several enhancements planned for both the Gemini API and Google AI Studio in the upcoming weeks. Our goal is to make Google AI Studio and the Gemini API the most user-friendly platforms for working with Gemini technology. Dive into Google AI Studio now with Gemini 1.5 Pro, discover coding guides and quick starts in our newly introduced Gemini API Cookbook, and connect with our developer community on Discord. SOURCE
Llama 3 LLM: Meta’s Next Big Leap, Launching Soon!
Meta is set to unveil Llama 3, its latest large language model, within the next month, following reports by The Information about its imminent release. Nick Clegg, Meta’s president of global affairs, announced at a London event that the launch of this next-gen foundation model is just around the corner, with plans to introduce various versions offering diverse capabilities throughout the year.
Chris Cox, Meta’s Chief Product Officer, highlighted that Llama 3 aims to enhance a wide array of Meta’s products, marking a significant step in the company’s efforts to leverage generative AI technology. This move is part of Meta’s strategy to compete with OpenAI’s ChatGPT, which has significantly impacted the tech industry by popularizing AI-driven interactions.
Despite adopting a cautious stance in AI development, Meta is addressing feedback on the limitations of earlier Llama versions. Llama 3 promises not only to improve question-answering precision but also to expand the range of queries it can handle, including potentially sensitive topics. This enhancement is anticipated to boost user engagement with the product.
Joelle Pineau, Meta’s Vice President of AI Research, ambitiously envisions Meta’s Llama-powered AI becoming the world’s leading assistant. However, she acknowledges the significant journey ahead to achieve this. Meta has been discreet about Llama 3’s specifics, including its parameter size, though it’s rumored to boast around 140 billion parameters—doubling that of its predecessor, Llama 2.
Meta’s commitment to developing Llama as an open-source venture underscores a strategic departure from the proprietary nature of some AI models, aiming to garner developer support and foster a more collaborative AI ecosystem. Yet, the company treads carefully, particularly with innovations beyond text generation like Emu, its image generation tool, which remains under wraps for now.
Chris Cox emphasized the importance of latency, safety, and user-friendliness in generating images that meet creative needs. Meanwhile, Meta faces internal scepticism about generative AI’s future. Yann LeCun, Meta’s chief AI scientist and a notable figure in AI academia, critiques generative AI’s limitations, advocating for the development of Joint Embedding Predictive Architecture (JEPA). He believes JEPA, already in use for enhancing image prediction accuracy at Meta, represents the next leap in AI technology, suggesting a potential rebranding of Cox’s product division to reflect this shift. SOURCE
Conclusion:
This week in tech witnessed pivotal movements across the AI landscape. Legislative efforts by Congressman Adam Schiff would mandate copyright disclosures to ensure ethical AI development, while the global launch of the Gemini 1.5 Pro model enhances AI capabilities with native audio understanding and improved functionality. Meanwhile, Meta gears up to release Llama 3, its latest large language model, promising advancements in AI-driven interactions and a continued commitment to open-source development. These developments underscore the tech industry’s dynamic evolution, spotlighting the balance between innovation and ethical considerations, the expansion of AI’s capabilities, and the ongoing quest for the most effective AI models. As technology continues to push boundaries, the focus remains on harnessing AI’s potential responsibly and collaboratively.
Stay connected with Arcot Group for more AI news and insights into how such developments are reshaping the tech landscape and paving the way for future innovations. For further reading on similar breakthroughs and the impact of AI and robotics, explore our blog.