Gogo Sempai@programming.dev to Comic Strips@lemmy.world · 2 years ago
Didn't have ChatGPT back in the day to cook up professional-sounding paragraphs from bullet points
SpaceNoodle@lemmy.world · 2 years ago
What a terrible way to waste everyone's time.
Pechente@feddit.de · 2 years ago (edited)
Yeah, but that seems like more of a cultural issue than a ChatGPT issue if businesses expect emails to have a certain form.
oldGregg@lemm.ee · 2 years ago
The recipient just copies the message into ChatGPT and asks it for a summary. It's like a shitty cipher.
Gogo Sempai@programming.dev (OP) · 2 years ago
What's becoming mainstream these days:
The sender uses ChatGPT/Copilot/Bard to turn a content summary into a big professional email.
The receiver uses ChatGPT/Copilot/Bard to break the big professional email back down into a summary.
Time is saved, but what a waste of electricity (LLMs need GPU computation for faster output)!
jscummy@sh.itjust.works · 2 years ago
Is time saved, though? It sounds like two useless steps have been added, with an extra layer of translation that could cause misunderstandings.
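As a footnote to the exchange above: the round trip Gogo Sempai describes amounts to two symmetric LLM calls. A minimal sketch, assuming the OpenAI Python SDK (v1.x); the model name, prompts, and helper functions are illustrative, not anything the commenters actually used:

```python
# Sketch of the expand-then-summarize round trip, assuming the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def expand_to_email(bullet_points: str) -> str:
    """Sender side: inflate terse bullet points into a 'professional' email."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system", "content": "Rewrite these bullet points as a polite, formal business email."},
            {"role": "user", "content": bullet_points},
        ],
    )
    return resp.choices[0].message.content

def summarize_email(email_body: str) -> str:
    """Receiver side: compress the long email back into bullet points."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize this email as short bullet points."},
            {"role": "user", "content": email_body},
        ],
    )
    return resp.choices[0].message.content

# Two GPU-backed API calls to end up roughly where the sender started.
bullets = "- demo moved to Friday\n- need updated slides by Thursday noon"
print(summarize_email(expand_to_email(bullets)))
```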