2 min in and I have no idea what I am watching..
Paging /u/SolidGoldMagikarp
Doesn’t look like anything to me
@12:00 made me think of Everything Everywhere All at Once
TL;DW: language models spit out weird results when fed tokens that don't sit in any similarity/meaning cluster. Tokens are supposed to represent frequently occurring strings, but because of mismatched sampling and data culling (the tokenizer saw text that was later filtered out of the training set), some tokens end up representing strings the model almost never trained on. Their embeddings never get pulled toward anything meaningful, so when the model encounters them it gives very precise but very bad results, because they're not "near" any meaningful/similar clusters.
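A minimal sketch of one way to poke at this yourself, assuming the Hugging Face transformers library and GPT-2 (my choices, not anything from the video): under-trained token embeddings tend to sit unusually close to the centroid of the embedding matrix, since they barely moved from initialization, so ranking tokens by distance to the centroid surfaces glitch-token candidates.

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Input embedding matrix: one row per vocabulary token.
emb = model.get_input_embeddings().weight.detach()  # (vocab_size, d_model)
centroid = emb.mean(dim=0)
dist = (emb - centroid).norm(dim=1)  # L2 distance of each token to the centroid

# Heuristic (an assumption, not the video's method): tokens closest to the
# centroid were rarely or never updated in training, making them candidates
# for glitch-token behavior.
closest = torch.argsort(dist)[:20]
for idx in closest.tolist():
    print(f"{idx:6d}  {dist[idx].item():.4f}  {tokenizer.decode([idx])!r}")
```

Printing the nearest-to-centroid tokens for GPT-2 is just a starting point; confirming glitch behavior still means prompting the model with each candidate (e.g. asking it to repeat the token) and watching for the anomalous outputs described above.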