Some of you may not remember this, but Apple has been doing text-to-speech for ages. We don't quite know when it was introduced (though we suppose a quick Google search could solve that), but the point is, it's been around for a long time. Since many of our schools used Apple computers, we discovered text-to-speech in the mid-90s. What can Apple be planning now?
Granted, it's been several years since we've played with the feature, because we are fortunate enough not to be visually impaired, and the novelty of making our computer say naughty things has worn off.
But if this patent filing is any indication of the things Apple hopes to do, we may have a use for text-to-speech again:
Algorithms for synthesizing speech used to identify media assets are provided. Speech may be selectively synthesized from text strings associated with media assets, where each text string can be associated with a native string language (e.g., the language of the string). When several text strings are associated with at least two distinct languages, a series of rules can be applied to the strings to identify a single voice language to use for synthesizing the speech content from the text strings. In some embodiments, a prioritization scheme can be applied to the text strings to identify the more important text strings. The rules can include, for example, selecting a voice language based on the prioritization scheme, a default language associated with an electronic device, the ability of a voice language to speak text in a different language, or any other suitable rule.
If that's what it sounds like, users can have text read to them in various languages, but the neat part is at the end: "the ability of a voice language to speak text in a different language."
Imagine having a document in a language you don't fully understand and being able to have it read to you. Of course, the result would be a bit rough, but it could be a cool feature if it becomes a reality.
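For the curious, the rule cascade the patent abstract describes could look something like the sketch below. To be clear, this is our own rough guess at the logic, not Apple's actual implementation: every function name, field, and the specific rule ordering here is invented for illustration.

```python
# Hypothetical sketch of the voice-language selection rules described
# in the patent abstract. All names and the rule order are our guesses.

DEVICE_DEFAULT = "en"  # assumption: the device's default language

# Assumption: which other languages each synthesis voice can also
# pronounce acceptably (the "ability of a voice language to speak
# text in a different language" rule).
VOICE_CAN_SPEAK = {
    "en": {"en", "fr"},
    "fr": {"fr"},
    "ja": {"ja"},
}

def pick_voice_language(strings):
    """strings: list of (text, native_language, priority) tuples,
    e.g. a track title and an artist name from one media asset.
    Returns a single voice language to synthesize all of them."""
    languages = {lang for _, lang, _ in strings}

    # Rule 1: if every string shares one native language, just use it.
    if len(languages) == 1:
        return languages.pop()

    # Rule 2: note the language of the highest-priority string
    # (the abstract's prioritization scheme; e.g. a title might
    # outrank an album name).
    _, top_lang, _ = max(strings, key=lambda s: s[2])

    # Rule 3: if one voice can speak every language present, use it.
    for voice, speakable in VOICE_CAN_SPEAK.items():
        if languages <= speakable:
            return voice

    # Rule 4: fall back to the device default if it appears at all.
    if DEVICE_DEFAULT in languages:
        return DEVICE_DEFAULT

    # Last resort: the highest-priority string's language.
    return top_lang

# Example: an English track title (priority 2) plus a French artist
# name (priority 1). The English voice can speak both, so "en" wins.
print(pick_voice_language([("Hello", "en", 2), ("Édith Piaf", "fr", 1)]))
```

Again, this is just one plausible reading of the abstract; the filing leaves the exact ordering and weighting of the rules open.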