OpenAI’s GPT-4 Trained on Massive YouTube Video Dataset: Report Raises Concerns

OpenAI reportedly utilized over a million hours of YouTube videos to train its advanced language model, GPT-4, according to a recent report by The New York Times.

  • Concerns have been raised about compliance with YouTube’s terms of service, which restrict the use of its videos for independent applications.
  • The revelation follows broader scrutiny of major tech companies such as Google and Meta over how they acquire training data, raising questions about copyright law and ethics.

OpenAI’s latest language model, GPT-4, has stirred controversy following revelations that it was trained on a dataset comprising over a million hours of YouTube videos. According to the report, OpenAI used its speech recognition tool Whisper to transcribe the videos, and the resulting transcripts supplied conversational text for training the model. This approach has sparked concerns about potential violations of YouTube’s terms of service, which restrict the use of its content for independent applications.
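For readers curious about the mechanics, the sketch below shows how a single video or audio file might be transcribed with the open-source whisper Python package. It is purely illustrative: the file name and model size are assumptions, and the report does not describe the details of OpenAI’s actual pipeline.

```python
# Illustrative sketch only: transcribing one audio/video file with the
# open-source "openai-whisper" package (pip install openai-whisper).
# The file name and model size are assumptions, not details from the report.
import whisper

# Load a pretrained Whisper model; "base" trades accuracy for speed.
model = whisper.load_model("base")

# Whisper decodes the file's audio track (via ffmpeg) and returns a transcript.
result = model.transcribe("example_video.mp4")

# The plain-text transcript could then serve as training text.
print(result["text"])
```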

The report’s findings come amid heightened scrutiny of data acquisition practices across the tech industry. Major players like Google and Meta (formerly Facebook) have also faced criticism for their methods. Google, for instance, has been accused of transcribing YouTube videos for AI training, potentially infringing on copyright. Meta, meanwhile, has reportedly discussed acquiring the publisher Simon & Schuster to access copyrighted content and weighed using copyrighted material gathered from the internet, despite the legal and ethical risks.
