Balaji

@balajis

over 2 years ago

LLMs often generate factually incorrect instructions. Following such instructions in the physical world eventually runs into an error. And the longer the list of instructions, the more likely an error arises somewhere along the way. So: plausible texts fail empirical tests.
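The compounding effect can be sketched with a quick calculation. Assuming, as a hypothetical simplification, that each instruction is wrong independently with some fixed probability p, the chance the whole list survives shrinks exponentially with its length:

```python
# Hypothetical model: each instruction fails independently with
# probability p, so an n-step list succeeds with probability (1 - p)**n.

def survival_probability(p: float, n: int) -> float:
    """Probability that all n instructions are correct."""
    return (1 - p) ** n

# Even a modest per-step error rate compounds quickly.
for n in (1, 10, 50):
    print(n, round(survival_probability(0.05, n), 3))
```

With a 5% per-step error rate, a 50-step list succeeds less than 8% of the time under this toy model.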

For digital instructions (like a computer program), you can take LLM output and instantly check to see if it at least compiles. But physical instructions can’t be checked as easily, unless it’s an easily simulated problem.
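The "at least compiles" check is cheap to sketch. Here is a minimal version for Python output, using the built-in compile() as the empirical test; the helper name check_compiles is mine, not from the post:

```python
def check_compiles(source: str) -> bool:
    """Cheap empirical test: does LLM-generated Python at least compile?"""
    try:
        compile(source, "<llm-output>", "exec")
        return True
    except SyntaxError:
        return False

# A plausible-looking snippet that passes the check...
print(check_compiles("def add(a, b):\n    return a + b\n"))  # True
# ...and one that fails it (missing colon).
print(check_compiles("def add(a, b) return a + b\n"))        # False
```

Note this only verifies syntax, not correctness; physical instructions lack even this weak, instant check unless the problem is easy to simulate.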
