As you know, within the language industry and the broader commercial world, we are all adapting to the impact, opportunities and realities of AI as the next industrial revolution.

As with any new technology, adoption goes through a learning curve as we work out how to optimise its use and mitigate the risks.

Skills and roles need to adapt too, with people developing and reskilling to use the new tools.

Much has already been written about AI security, control and governance by the Big 5, government and many influencers.

But we thought this article from our own legal advisors, DarwinGrayLLP, captured some common-sense considerations, particularly for our legal clients, many of whom will no doubt already be weighing them up.

Our concern is that, where AI is being used to create documents and contracts, the risk of hallucination in particular is very real.

How often are documents skimmed rather than fully read, let alone proofread? We may all be guilty of that with documents we think we already know.

Then add AI translation into the mix, without a human reviewer involved, and the risk of layering error upon error, changing the context and the intended meaning, becomes even greater.

In the pursuit of efficiency, cost management and getting more from less, our legal translation clients face a real challenge in maintaining legal compliance and accuracy in the information they present.

And I know that many are focusing their attention on risk management within their AI policies and governance.

But please, when looking at the bigger picture, don't forget: the further a human is from the content, the greater the risk becomes.

https://www.darwingray.com/5-hidden-dangers-of-using-ai-to-draft-contracts-a-cautionary-guide-for-businesses/ 

#LegalTranslation #AI #Compliancy #riskmitigation #AIhybridtranslation #hallucinationeffect #Translationagency #languageserviceprovider #Languageindustry #humaninput