The New Yorker's Instagram (newyorkermag) - October 7, 04:30


Large language models, like OpenAI’s ChatGPT and Google’s Bard, are completing a process begun by Google’s OneBox: their goal is to ingest the Web so comprehensively that it might as well not exist, James Somers writes. In the early 2010s, OneBox—a feature that provided searchers with instant answers above the search results—siphoned traffic from the very sites that made it possible. Many Web 2.0 darlings dense with user-generated content, like Yelp and Stack Overflow, saw their visitor numbers decline.

When OpenAI created ChatGPT, Wikipedia was its most important data set, followed by Reddit. The sites that have fuelled chatbots function like knowledge economies, brokering information to where it is most needed, and chatbots are already thinning out the demand side of these marketplaces, starving the human engines that created the knowledge in the first place.

But it’s also a problem for A.I. It’s possible that A.I. models can only hoover up the whole Web once. If they are to continue getting smarter, they will need new reservoirs of knowledge. Where will that knowledge come from? At the link in our bio, read more about how A.I. might continue to evolve—and the implications that come with this potential growth.

Illustration by @vivek_thakker.

