

(We’re going to need an AI community lol instead of posting to genzedong all the time)
I had made c/Singularity for things like this, but most of the talk about AI/LLMs happens on c/technology since it's bigger.
Then that's additional context the LLM can use to understand what it's translating, just by reading the key property.
I was just thinking it would be easy to strip the keys to save tokens, but treating them as extra context for the LLM makes a lot of sense.
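A tiny sketch of the trade-off (the entries and key names here are hypothetical, just to illustrate): descriptive keys tell the model where each string appears, which can disambiguate short UI strings that would otherwise be ambiguous on their own.

```python
import json

# Hypothetical localization entries; the keys describe where each
# string is used, which is extra context for the translator LLM.
entries = {
    "checkout.button.confirm_purchase": "Confirm",
    "editor.menu.file.save": "Save",
}

# Stripping keys saves tokens but loses that context:
values_only = list(entries.values())

# Keeping the full pairs lets the model tell, e.g., that "Save"
# is a file-menu action rather than "save money":
with_context = json.dumps(entries, indent=2)
print(with_context)
```

Whether the extra tokens pay off probably depends on how ambiguous the strings are without their keys.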
Nice job!
And two questions: Do you ask the LLM, after it's done, whether it made mistakes or whether anything could be improved? And for books and longer texts, do you have to break them up, or do you stick to things that can be done in one go?
lol
In some tests I was doing with Qwen 0.6B for classification, I did ask it multiple times and basically gave more weight to answers that appeared across more tries. In your case you could probably ask two different models, take anything both translated identically as "good enough", and use another LLM to check the remaining items. Although the longer the sentence/text/key, the less such a system is likely to help, and the more the raw LLM abilities will be necessary.
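The "ask multiple times and weight by frequency" idea is basically majority voting over sampled outputs. A minimal sketch (the classifier here is a deterministic stand-in; in practice it would be a sampled call to e.g. Qwen 0.6B with temperature above zero):

```python
from collections import Counter

def classify_with_votes(classify, text, n_tries=5):
    """Run a non-deterministic LLM classifier several times and keep
    the label that appears most often -- simple majority voting."""
    votes = Counter(classify(text) for _ in range(n_tries))
    label, count = votes.most_common(1)[0]
    return label, count / n_tries  # winning label plus its vote share

# Hypothetical stand-in for a sampled LLM call:
answers = iter(["good", "bad", "good", "good", "bad"])
label, share = classify_with_votes(lambda text: next(answers), "some input")
print(label, share)  # good 0.6
```

The vote share also gives you a rough confidence signal, so low-agreement items are the ones worth routing to a second model or a human.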
And as for asking the LLMs about mistakes, I was curious because big LLMs should be able to catch some of their own mistakes through reflection…