While its CEO, Sam Altman, called for stricter regulation of artificial intelligence as he toured the world, ChatGPT maker OpenAI reportedly lobbied the European Commission to weaken significant elements of its proposed AI legislation. In several instances, OpenAI sought amendments that eventually made it into the final text of the draft AI Act, TIME magazine reported, citing documents obtained from the EU via freedom of information requests.

By itself, GPT-3 is not a high-risk system but possesses capabilities that can potentially be employed in high-risk use cases, OpenAI said in a seven-page document sent to the EU, adding that various supervisory or auditing requirements would apply to such systems.

To avoid stricter rules, OpenAI pushed back against a proposed amendment to the AI Act that sought to classify generative AI systems like ChatGPT and DALL-E as high risk if they generated text or images that could falsely appear to a person as human generated and authentic. In its white paper, OpenAI said that GPT-3 and its other general purpose AI systems, such as DALL-E, may generate outputs that could be mistaken for human text and image content.

OpenAI will try to comply with European regulation once it is set before considering pulling out of the region, Altman said at an event in London. The white paper's position differs from Altman's public stance, as he has appealed for stricter regulation of AI technologies on multiple occasions. At an ET event earlier this month, Altman said: "We have explicitly said there should be no regulations on smaller companies or on the current open source models - it's important to let that flourish. The only regulation we have called for is on people like ourselves or bigger."
On June 14, the European Parliament voted to approve its proposed AI Act, a piece of legislation that it hopes will shape worldwide standards for the regulation of AI. It would become the first law in the world dedicated to regulating AI across almost all sectors of society, with the exception of defence. The law would govern AI according to risk level: the more significant the risk to individuals' rights or health, for example, the greater the obligations imposed on the AI system. The bill introduces strict penalties for erring companies, which can run up to 6% of their total worldwide annual turnover, and lays down specific limits and safeguards along with clear obligations for service providers.
OpenAI lobbied the EU to avoid stricter regulation of AI

8 Comments
OpenAI's attempts to avoid high-risk classification suggest they prioritize profit over accountability.
Василий
OpenAI's actions undercut the efforts of companies that are serious about ethical AI development.
Muchacho
The amendments OpenAI fought for suggest it is unwilling to take responsibility for the potential harms caused by its AI systems.
Pedalka
By pushing for weaker regulation, OpenAI is setting a risky precedent for other AI companies to follow.
Василий
OpenAI's actions reflect a disregard for the European Commission's efforts to protect citizens from harmful AI.
Noir Black
OpenAI's pledge to comply with the regulations if they are successfully weakened is disingenuous.
Eugene Alta
OpenAI's actions show a lack of genuine commitment to responsible AI development.
Noir Black
Weakening the legislation leaves the public vulnerable to AI systems that may cause harm.