OpenAI CEO Sam Altman unveiled ChatGPT Agent, a powerful new AI product that performs complex tasks using its own computing environment. While praising its utility, Altman warned users about the tool’s experimental nature and security risks, urging them to grant it only minimal access for sensitive tasks.
Sam Altman, CEO of OpenAI (Source: X, Unsplash)
Sam Altman, CEO of OpenAI, took to X to announce the launch of a new product called ChatGPT Agent.
According to Altman, Agent represents a new level of capability for AI systems and can accomplish some remarkable, complex tasks for you using its own computer. It combines the spirit of Deep Research and Operator, but is more powerful than that may sound: it can think for a long time, use some tools, think some more, take some actions, think some more, and so on.
During the launch, OpenAI demoed the agent preparing for a friend’s wedding: buying an outfit, booking travel, and choosing a gift. Another demo showed it analysing data and creating a presentation for work.
OpenAI shared on Instagram that the feature is rolling out to Plus, Pro, and Team users.
https://www.instagram.com/p/DMN4sXjJjeX/?hl=en
Although the utility is significant, Altman warned users of potential risks.
He said that OpenAI has built many safeguards and warnings into the product, along with broader mitigations than it has ever developed before, from robust training to system safeguards to user controls. Still, not everything can be predicted, so the company has chosen to warn users heavily and give them the freedom to take actions carefully if they wish. He described the product as experimental and advised against using it for high-stakes scenarios or sharing sensitive personal information with it until further study and real-world improvements are made.
https://x.com/sama/status/1945900345378697650
He said, “We don’t know exactly what the impacts are going to be, but bad actors may try to ‘trick’ users’ AI agents into giving private information they shouldn’t and take actions they shouldn’t, in ways we can’t predict. We recommend giving agents the minimum access required to complete a task to reduce privacy and security risks.
“For example, I can give Agent access to my calendar to find a time that works for a group dinner. But I don’t need to give it any access if I’m just asking it to buy me some clothes. There is more risk in tasks like ‘Look at my emails that came in overnight and do whatever you need to do to address them, don’t ask any follow-up questions.’ This could lead to untrusted content from a malicious email tricking the model into leaking your data.”
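Altman’s least-privilege advice can be sketched in code. The function and scope names below are purely illustrative assumptions for this article, not part of any real OpenAI API: the idea is simply that each task is mapped to the smallest set of access scopes it needs, and tasks that ingest untrusted content (like overnight email) hold no extra permissions.

```python
# Hypothetical sketch of least-privilege scoping for an AI agent.
# None of these names come from OpenAI's product; they only illustrate
# the principle "give agents the minimum access required for the task".

def minimal_scopes(task: str) -> set[str]:
    """Map a task to the smallest set of access scopes it needs."""
    scope_map = {
        "schedule_dinner": {"calendar.read"},  # only needs to see the calendar
        "buy_clothes": set(),                  # needs no personal data at all
        "triage_email": {"email.read"},        # reads untrusted content, so
                                               # deliberately gets no send/write scope
    }
    # Unknown tasks default to no access rather than broad access.
    return scope_map.get(task, set())

# A task that touches untrusted input should never also hold scopes
# it doesn't strictly need, limiting what a prompt-injection can do.
print(minimal_scopes("buy_clothes"))    # an empty set: no access granted
print(minimal_scopes("triage_email"))   # read-only email access
```

The design choice mirrors Altman’s examples: the calendar task gets calendar access only, the shopping task gets nothing, and the risky email task is read-only so a malicious email cannot trick the agent into sending data out.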
They believe it is essential to start learning from real-world experiences and for individuals to adopt these tools gradually and thoughtfully, as they work to better identify and manage the potential risks involved.
People have actively engaged with the post on X, expressing excitement to explore the new product’s features. However, some have also voiced concerns about potential risks and questioned whether it will turn into a safety disaster. Many have additionally suggested that the name ‘ChatGPT Agent’ could have been more creative.