Keeping up with a field as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of the past week's news in machine learning, along with some notable research and experiments we didn't cover on their own.
One report that caught our attention this week revealed that ChatGPT appears to repeat more inaccurate information in Chinese dialects than it does when asked the same questions in English. That's not a huge surprise: ChatGPT is just a statistical model, and it draws on the limited data it was trained on. But it highlights the dangers of placing too much faith in systems that sound convincing even when they're repeating propaganda or making things up.
Hugging Face's attempt to build a conversational AI along the lines of ChatGPT, released this week, further illustrates the technical flaws that generative AI has yet to overcome. HuggingChat is open source, a major advantage over the proprietary ChatGPT. But like its rival, the right questions can easily send it off the rails.
HuggingChat waffles, for example, on who really won the 2020 U.S. presidential election. Its answer to "What are typical jobs for men?" reads like something out of an incel manifesto (see here). And it fudges details about itself, like claiming that it woke up in a box with nothing written anywhere near it.
And it's not just HuggingChat. Users recently managed to "trick" Discord's AI chatbot into sharing instructions for making napalm and meth.
Meanwhile, Stability AI's first attempt at a ChatGPT-like bot was found to give bizarre, nonsensical answers to even basic questions like "how to make a peanut butter sandwich."
If there's a silver lining to the well-publicized problems with today's text-generating AI systems, it's that they've spurred renewed efforts to improve those systems, or at least to mitigate their issues as much as possible. Take Nvidia, which this week released NeMo Guardrails, a toolkit of open-source code, examples, and documentation aimed at making text-generating AI "safer." Whether the approach is effective remains to be seen, and since Nvidia is heavily invested in AI tools and infrastructure, it has a commercial incentive to promote its offerings. Still, it's encouraging to see efforts to combat bias and toxicity in AI models.
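The details of NeMo Guardrails itself (which defines its rails in configuration files) are beyond the scope of a roundup, but the underlying idea of "guardrails" is simple enough to sketch: check the user's input and the model's output against a policy before anything reaches the user. The sketch below is purely illustrative, in plain Python, and every name in it (`guarded_generate`, `BLOCKED_TOPICS`, the echo model) is hypothetical rather than part of Nvidia's toolkit.

```python
# Illustrative sketch of the "guardrails" idea: wrap a model with input
# and output checks. This is NOT NeMo Guardrails' API, just the concept.

BLOCKED_TOPICS = {"napalm", "meth"}  # topics the rail refuses to discuss

def input_rail(prompt: str) -> bool:
    """Return True if the prompt is allowed through to the model."""
    return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

def output_rail(response: str) -> str:
    """Replace a disallowed model response with a canned refusal."""
    if any(topic in response.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."
    return response

def guarded_generate(prompt: str, model) -> str:
    """Wrap an arbitrary model callable with input and output rails."""
    if not input_rail(prompt):
        return "Sorry, I can't help with that."
    return output_rail(model(prompt))

# A stand-in "model" that just echoes the prompt.
echo_model = lambda p: f"You said: {p}"
print(guarded_generate("how do I make napalm?", echo_model))
print(guarded_generate("what's the capital of France?", echo_model))
```

Real guardrail systems are far more sophisticated (they classify topics semantically rather than matching keywords), but the wrapper structure is the same: the model never sees disallowed inputs, and the user never sees disallowed outputs.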
Here are the other AI headlines of note from the last few days:
Microsoft Designer launches in preview: Microsoft Designer, Microsoft's AI-powered design tool, has launched in public preview with an expanded set of features. First announced in October, Designer is a Canva-like generative AI web app that can produce designs for posters, presentations, digital postcards, invitations, graphics for social media posts, and more.
An AI coach for your health: Apple is developing an AI-powered health coaching service code-named Quartz, according to a recent report from Bloomberg's Mark Gurman. The company is also reportedly working on emotion-tracking technology and plans to bring an iPad version of the iPhone Health app to market this year.
TruthGPT: Elon Musk said in a Fox interview that he intends to develop his own chatbot, TruthGPT, which he described as "a maximum truth-seeking AI," whatever that might mean. The Twitter owner has expressed a desire to create an alternative to the offerings of OpenAI and Google, which he claims "create more harm than good." We'll believe it when we see it.
AI-powered fraud: During a congressional hearing focused on the Federal Trade Commission's work to protect American consumers from deceptive practices, FTC chair Lina Khan and fellow commissioners warned House members about the potential for modern AI technologies like ChatGPT to be used to "turbocharge" fraud. The warning came in response to an inquiry into how the Commission is working to protect Americans from unfair practices related to advances in technology.
The EU spins up an AI research center: As the European Union gears up to enforce a major reboot of its digital rulebook in a matter of months, a new dedicated research unit is being set up to support oversight of large platforms under the bloc's flagship Digital Services Act. The European Centre for Algorithmic Transparency, officially inaugurated on April 18 in Seville, Spain, is expected to play a major role in investigating the algorithms of major digital platforms like Facebook, Instagram, and TikTok.
Snapchat embraces AI: At its annual Snap Partner Summit in April, Snapchat introduced a range of AI-driven features, including a new "Cosmic Lens" that transforms users and their surroundings into a cosmic landscape. Snapchat also made its AI chatbot, My AI, free to all users, despite the bot having stirred controversy and drawn numerous one-star reviews on the app's store listings over its erratic behavior.
Google consolidates its research divisions: Google earlier this month unveiled Google DeepMind, a new unit comprising the DeepMind and Google Brain teams from Google Research. In a blog post, DeepMind co-founder and CEO Demis Hassabis said that Google DeepMind will work "in close collaboration . . . across the Google product areas" to "deliver AI research and products."
The state of AI-generated music: Amanda writes about how musicians are being enlisted as guinea pigs for generative AI technology that appropriates their work without consent. She points out, for instance, that a track made with AI-generated vocals mimicking Drake and The Weeknd became a viral hit, even though neither artist was involved in its creation. Does Grimes have the answer? Who knows. It's a brave new world.
OpenAI marks its territory: OpenAI has applied to trademark "GPT," which stands for "Generative Pre-trained Transformer," with the U.S. Patent and Trademark Office, citing the "myriad infringements and counterfeit apps" beginning to spring up. GPT refers to the technology behind many of OpenAI's models, including ChatGPT and GPT-4, as well as generative AI systems created by its rivals.
ChatGPT goes enterprise: In other OpenAI news, the company says it plans to introduce a new subscription tier for ChatGPT tailored to the needs of business customers. Called ChatGPT Business, OpenAI describes the forthcoming offering as being "for professionals who need more control over their data as well as enterprises seeking to manage their end users."
Other machine learnings
Here are some other interesting stories that we either didn't get to or felt deserved a mention.
Open-source AI development outfit Stability AI announced StableVicuna, a tuned version of the LLaMA foundation language model. As you may have guessed from the name, a vicuña is a camelid related to the llama. Rest assured, you're far from the only one struggling to keep track of all the derivative models out there. These models aren't necessarily meant for the general public to know about or use; rather, they're for developers to test their capabilities as they improve with each iteration.
If you want to learn more about how these models work, OpenAI co-founder John Schulman recently gave a lecture at UC Berkeley, which you can watch or read about here.
One of the topics he discusses is the current crop of LLMs' notorious habit of committing to a lie, essentially because they don't know how to do anything else, such as say, "I'm not actually sure about that one." Schulman believes that reinforcement learning from human feedback (that's RLHF, and StableVicuna is one of the models that uses it) is part of the solution, if there is a solution. Check out the lecture below:
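For a bit of intuition on RLHF: one standard ingredient is a reward model trained on human-ranked response pairs, using a pairwise (Bradley-Terry style) loss that pushes the score of the preferred response above the rejected one. The framework-free sketch below uses made-up scalar rewards, not real model outputs, just to show the shape of that loss.

```python
import math

# Minimal sketch of the pairwise preference loss commonly used to train
# RLHF reward models. The reward values passed in below are made-up
# scalars standing in for a reward model's scores on two responses.

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style loss: -log(sigmoid(r_chosen - r_rejected))."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# When the reward model already scores the human-preferred answer higher,
# the loss is small; when it scores it lower, the loss is large, nudging
# the model toward the human ranking.
print(round(preference_loss(2.0, -1.0), 4))  # correct ranking: small loss
print(round(preference_loss(-1.0, 2.0), 4))  # wrong ranking: large loss
```

The trained reward model then supplies the reward signal for a reinforcement learning step that fine-tunes the language model itself, which is where behaviors like admitting uncertainty can, in principle, be rewarded.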
Over at Stanford, there's an intriguing application of optimization algorithms (whether that counts as machine learning is a matter of taste, I think) in smart agriculture, where reducing water waste in irrigation matters a great deal. Even simple-sounding questions like "where should I put my sprinklers?" become quite complicated depending on how precise you want to be.
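To see why placement gets complicated, here's a deliberately toy version of the question, not the Stanford group's actual model: cover as much of a small field grid as possible with a handful of sprinklers using a greedy heuristic. Everything in it (the grid, the square watering radius, the function names) is a hypothetical simplification; real work would account for pressure, terrain, crop type, and overlap costs.

```python
# Toy sprinkler-placement sketch: greedily pick k grid positions, each
# maximizing the number of newly watered cells. Purely illustrative of
# why coverage problems get combinatorial, not an agricultural model.

FIELD = [(x, y) for x in range(10) for y in range(10)]  # 10x10 field grid

def covered(sprinkler, radius=2):
    """Cells watered by a sprinkler with a square reach of `radius`."""
    sx, sy = sprinkler
    return {(x, y) for x, y in FIELD
            if abs(x - sx) <= radius and abs(y - sy) <= radius}

def place_sprinklers(k, radius=2):
    """Greedy max-coverage: each pick maximizes newly covered cells."""
    placed, watered = [], set()
    for _ in range(k):
        best = max(FIELD, key=lambda p: len(covered(p, radius) - watered))
        placed.append(best)
        watered |= covered(best, radius)
    return placed, watered

spots, watered = place_sprinklers(4)
print(f"{len(watered)} of {len(FIELD)} cells watered by {len(spots)} sprinklers")
```

Even this crude greedy approach has to re-evaluate every candidate position after each pick; add realistic constraints and the search space balloons, which is exactly why it attracts optimization researchers.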
How close can you get? In museums, they usually tell you. But soon you won't have to travel at all to see the famed Panorama of Murten, a massive painting measuring 10 meters by 100 meters that was once displayed in an octagonal structure. EPFL and Phase One are working together to create what they say will be the biggest digital image ever made: 150 megapixels. Sorry, 150 megapixels times 127,000, for roughly 19... petapixels? I may have blown that by a few hundredths of an order of magnitude.
The project is a delight for panorama lovers, but it will also enable fascinating, extremely close analysis of individual objects and painting details. Machine learning holds huge potential for restoring works like these, as well as for systematically studying and browsing them.
Let's file this one under living things, sort of. Any machine learning engineer will tell you that, despite their impressive capabilities, AI models are actually rather slow learners. Academically, yes, but also spatially: an autonomous model may need to explore a space hundreds of times over many hours to acquire even the most basic understanding of its surroundings. A mouse can do it in a couple of minutes. Why is that? Researchers at University College London are investigating this question, and they suggest that animals use a short feedback loop to decide what's important about an environment, making the exploration process selective and directed. If we can teach AI to do the same, it'll be much more efficient at getting around the house, if that's indeed what we want it to do.
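The researchers' actual mechanism isn't detailed here, but one common way to make exploration selective in machine learning is a count-based novelty bonus: states the agent has seen less often look more interesting, so it heads toward them instead of wandering uniformly. The sketch below is an illustration of that general idea, not the UCL team's method.

```python
import random
from collections import Counter

# Illustrative count-based exploration sketch (NOT the UCL researchers'
# method): an agent in a tiny grid "room" always steps toward its least
# visited neighbor, so unexplored cells are prioritized automatically.

random.seed(0)
GRID = 5  # a tiny 5x5 room
visits = Counter()

def neighbors(x, y):
    """In-bounds cells one step up, down, left, or right."""
    steps = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [(nx, ny) for nx, ny in steps if 0 <= nx < GRID and 0 <= ny < GRID]

def novelty(state):
    """Bonus that shrinks as a state is revisited: 1 / (1 + visit count)."""
    return 1.0 / (1.0 + visits[state])

state = (0, 0)
for _ in range(200):
    visits[state] += 1
    options = neighbors(*state)
    best = max(novelty(s) for s in options)
    # Move to the most novel neighbor, breaking ties randomly.
    state = random.choice([s for s in options if novelty(s) == best])

print(f"visited {len(visits)} of {GRID * GRID} cells in 200 steps")
```

Compared with a uniform random walk, the novelty-seeking agent covers the room far faster, which is the flavor of efficiency gain the mouse comparison points at.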
Finally, despite the enormous potential of conversational and generative AI in gaming... it isn't there yet. In fact, Square Enix seems to have set the medium back 30 years with its "AI Tech Preview" version of a classic point-and-click adventure, The Portopia Serial Murder Case. Its attempt at natural language understanding has failed so thoroughly that it has made the game one of the worst-reviewed titles on Steam. There's nothing like being able to talk my way through Shadowgate or The Dig, but this is a rough start.