Cat@ponder.cat to Technology@lemmy.world · English · 10 months ago

Perplexity open sources R1 1776, a version of the DeepSeek R1 model that CEO Aravind Srinivas says has been “post-trained to remove the China censorship”.

www.perplexity.ai
fruitycoder@sh.itjust.works · 10 months ago

Listen, I’m highly critical of the CCP, but LLMs aren’t fact machines; they are “make text like what they were trained on” machines. They have no grasp of truth, and at best we can only get some sense of truth from the average collective text response of their dataset.
iopq@lemmy.world · 10 months ago

I’m talking about the example texts