A hacker gained access to OpenAI's internal forums, where the company discussed its technologies. The breach allegedly took place in April 2023, and OpenAI, the company behind the popular ChatGPT, did not disclose it publicly.
Even though, according to sources, there is no evidence that user data or product code was stolen, the incident has raised concerns about the security of artificial intelligence technology and its potential risks.
OpenAI did not disclose the attack

OpenAI chose not to report the attack to the authorities, believing that the hacker acted alone and without malicious intent. However, the intrusion provoked internal disagreement: OpenAI employees expressed fears that AI technologies could be stolen by foreign adversaries of the United States.
OpenAI's decision has drawn criticism for several reasons:
- Lack of Transparency: OpenAI was criticized for its initial lack of transparency about the attack, which led to questions about its commitment to security and ethics.
- Vulnerabilities to Espionage: The case exposes the possibility of governments or rival companies stealing AI secrets for malicious purposes, such as the development of cyber weapons or industrial espionage.
- Debate on AI Regulation: The incident revived the discussion on the need for global protocols and measures to ensure the responsible and ethical use of AI, especially in a scenario where technology is becoming increasingly powerful and complex.
Leopold Aschenbrenner, a former technical manager at OpenAI, was fired after warning that the company needed more robust security measures. Aschenbrenner believes his dismissal was politically motivated, but OpenAI denies the claim.
ChatGPT hack raises concerns
The episode raises questions about AI security at a crucial moment in the technology's development. Experts warn of the potential dangers of AI in the wrong hands, such as espionage, cyber warfare, and mass manipulation.
The rapid evolution of AI demands strict security measures and international collaboration to prevent misuse. Despite these concerns, studies indicate that AI in its current state does not pose an imminent security risk.
The future of AI depends on reconciling its enormous potential with the guarantee that it is used ethically and responsibly.