Real-time captioning of conversation 
<https://www.bloomberg.com/news/articles/2022-05-11/google-shows-early-preview-of-augmented-reality-glasses>. 
Highly accurate instant translation 
<https://techcrunch.com/2022/03/22/microsoft-improves-its-ai-translations-with-z-code/>. 
Auto voice mimicry 
<https://www.theverge.com/22672123/ai-voice-clone-synthesis-deepfake-applications-vergecast> 
making it sound like you 
<https://voicebot.ai/2022/03/29/respeecher-offers-celebrities-ukrainian-speaking-voice-clones-for-video-messages-supporting-ukraine/> 
speaking the translation. Real-time AR facial augmentation 
<https://arxiv.org/abs/2007.14808> making it also /look/ 
like you <https://www.youtube.com/watch?v=L1StbX9OznY> speaking the translation. 
Meanwhile, super-intelligent Turing-test-passing 
<https://www.independent.co.uk/tech/ai-deepmind-artificial-general-intelligence-b2080740.html> 
chatbots that look real 
<https://www.scientificamerican.com/article/humans-find-ai-generated-faces-more-trustworthy-than-the-real-thing/> 
and can talk tirelessly about any topic, in different languages, in anyone’s voice 
<https://techcrunch.com/2021/04/16/ai-driven-audio-cloning-startup-gives-voice-to-einstein-chatbot/>. 
Then, a little further into the future, brain-machine interfaces 
<https://www.science.org/content/article/first-brain-implant-lets-man-complete-paralysis-spell-out-thoughts-i-love-my-cool-son> 
that turn your thoughts into language, saving you the effort of talking at all. All 
this will bring us into the ‘human-machine era’, a time when the tech has moved out 
of our hands and into our ears, eyes, and brains.

These technologies are the subject of vast (and competing) venture capital. They are 
coming. When these are no longer futuristic but widespread everyday devices, what 
will language and interaction actually be like? Would you trust instant 
auto-translation while shopping? On a date? At a hospital? How much would you 
interact with virtual characters? Debate with them? Learn a new language from them? 
Socialise with them 
<https://www.cnet.com/tech/services-and-software/no-joke-googles-ai-is-smart-enough-to-understand-your-humor/>, 
or more 
<https://gizmodo.com/vr-researches-simulate-kisses-with-ultrasonic-transduce-1848849489>? 
Would you wear a device that lets you communicate without talking? And with all this 
new tech, would you trust tech companies 
<https://www.theverge.com/2022/4/28/23047026/amazon-alexa-voice-data-targeted-ads-research-report> 
with the bountiful new data 
<https://www.pcgamer.com/oculus-will-sell-you-a-quest-2-headset-that-doesnt-need-facebook-for-an-extra-dollar500/> 
they gather?

Meanwhile, what about the people who get left behind as these shiny new gadgets 
spread? As always with new tech, the devices will be prohibitively expensive for many. And 
despite rapid improvements 
<https://techcrunch.com/2022/03/22/microsoft-improves-its-ai-translations-with-z-code/>, 
progress will for some years remain slower 
<https://www.bbc.com/future/article/20210322-the-languages-that-defy-auto-translate> 
for smaller languages 
<https://ai.googleblog.com/2022/05/24-new-languages-google-translate.html> around the 
world, and much slower still for sign languages 
<https://aclanthology.org/2021.mtsummit-at4ssl.4.pdf>, despite the hype 
<https://www.theatlantic.com/technology/archive/2017/11/why-sign-language-gloves-dont-help-deaf-people/545441/>. 


‘Language in the Human-Machine Era’ <https://lithme.eu/> is an EU-funded research 
network putting together all these pieces. Watch our animations 
<https://lithme.eu/animations-and-survey/> setting out future scenarios, read our 
open access forecast report <https://jyx.jyu.fi/handle/123456789/75737>, and 
contribute to our big survey <https://lithme.eu/animations-and-survey/>!

Please forward this message anywhere it might be of interest, and retweet us 
<https://twitter.com/LgHumanMachine/status/1529140180644397058>!

Technologically yours,
Dave

__________________
Dr. Dave Sayers, ORCID no. 0000-0003-1124-7132
Senior Lecturer & Docent, Dept Language & Communication Studies, U. Jyväskylä, Finland |www.jyu.fi
Chair, EU COST Action CA19102 'Language in the Human-Machine Era' |www.lithme.eu
Founder & Moderator, TeachLing |https://www.jiscmail.ac.uk/teachling
[log in to unmask]  |https://jyu.academia.edu/DaveSayers

########################################################################

To unsubscribe from the TEACHLING list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=TEACHLING&A=1

This message was issued to members of www.jiscmail.ac.uk/TEACHLING, a mailing list hosted by www.jiscmail.ac.uk, terms & conditions are available at https://www.jiscmail.ac.uk/policyandsecurity/