Key Takeaway
OpenAI’s ChatGPT can assist with tasks like reviewing team documents and conducting competitive research, but its capabilities are limited for security reasons. It cannot execute code, download files, or access local systems. When dealing with sensitive sites, it pauses for user oversight. OpenAI acknowledges the risks of hidden malicious instructions that could compromise security, despite extensive testing. They admit that their safeguards may not prevent all potential attacks as AI agents become more prevalent. Users have control over browser memories, which can be reviewed or deleted at any time.
At work, OpenAI states, “you can ask ChatGPT to open and review past team documents, conduct new competitive research, and compile insights into a team brief.”
OpenAI is maintaining strict control over the agent’s capabilities.
It cannot execute code in browsers, download files, or install extensions, nor can it access other applications or local file systems.
When it encounters sensitive sites, such as financial platforms, the agent pauses to allow the user to monitor its activity.
The company acknowledges the security risks, noting that “agents are vulnerable to hidden malicious instructions, which may be embedded in places like a webpage or email, intending to override the ChatGPT agent’s intended behavior.”
Such exploits could lead to data exposure or unintended actions.
Despite conducting thousands of hours of security testing, OpenAI admits that “our safeguards will not prevent every attack that arises as AI agents gain popularity.”
How privacy controls give users the final say
Browser memories are an optional feature.
Users can review or archive them at any time within the settings, and OpenAI confirms that “deleting browsing history removes any associated browser memories.”








100 Comments