Acta Lingweenie


frislander-deactivated20220305 asked:

Hi, sorry to bring up something from ages ago, but I was wondering if we might see some stuff released on the Usandu language for Grey Goo. It's been a while, and the game looks like it might be on the way out (the game website hasn't been updated in over a year), so I wonder if it might be possible to negotiate releasing a grammar, say, into the public domain.

I’ll chat with my contacts there to see what they think.

Anonymous asked:

hi, strange request: would it be easy for you to save the conlanger's thesaurus into .epub format, that we may carry it wherever we go? the .pdf version that's up right now is not converting in calibre and refuses to open on my nook.

I have no idea. It is written in LaTeX and might take some trickery to convert. I’ll do some digging.

gacorley
pearwaldorf

Throughout her translation of the “Odyssey,” Wilson has made small but, it turns out, radical changes to the way many key scenes of the epic are presented — “radical” in that, in 400 years of versions of the poem, no translator has made the kinds of alterations Wilson has, changes that go to truing a text that, as she says, has through translation accumulated distortions that affect the way even scholars who read Greek discuss the original. These changes seem, at each turn, to ask us to appreciate the gravity of the events that are unfolding, the human cost of differences of mind.

The first of these changes is in the very first line. You might be inclined to suppose that, over the course of nearly half a millennium, we must have reached a consensus on the English equivalent for an old Greek word, polytropos. But to consult Wilson’s 60-some predecessors, living and dead, is to find that consensus has been hard to come by…

Of the 60 or so answers to the polytropos question to date, the 36 given above [which I cut because there were a lot] couldn’t be less uniform (the two dozen I omit repeat, with minor variations, earlier solutions); what unites them is that their translators largely ignore the ambiguity built into the word they’re translating. Most opt for straightforward assertions of Odysseus’s nature, descriptions running from the positive (crafty, sagacious, versatile) to the negative (shifty, restless, cunning). Only Norgate (“of many a turn”) and Cook (“of many turns”) preserve the Greek roots as Wilson describes them — poly (“many”), tropos (“turn”) — answers that, if you produced them as a student of classics, much of whose education is spent translating Greek and Latin and being marked correct or incorrect based on your knowledge of the dictionary definitions, would earn you an A. But to the modern English reader who does not know Greek, does “a man of many turns” suggest the doubleness of the original word — a man who is either supremely in control of his life or who has lost control of it? Of the existing translations, it seems to me that none get across to a reader without Greek the open question that, in fact, is the opening question of the “Odyssey,” one embedded in the fifth word in its first line: What sort of man is Odysseus?

“I wanted there to be a sense,” Wilson told me, that “maybe there is something wrong with this guy. You want to have a sense of anxiety about this character, and that there are going to be layers we see unfolded. We don’t quite know what the layers are yet. So I wanted the reader to be told: be on the lookout for a text that’s not going to be interpretively straightforward.”

Here is how Wilson’s “Odyssey” begins. Her fifth word is also her solution to the Greek poem’s fifth word — to polytropos:

Tell me about a complicated man.
Muse, tell me how he wandered and was lost
when he had wrecked the holy town of Troy,
and where he went, and who he met, the pain
he suffered in the storms at sea, and how
he worked to save his life and bring his men
back home. He failed to keep them safe; poor fools,
they ate the Sun God’s cattle, and the god
kept them from home. Now goddess, child of Zeus,
tell the old story for our modern times.
Find the beginning.

When I first read these lines early this summer in The Paris Review, which published an excerpt, I was floored. I’d never read an “Odyssey” that sounded like this. It had such directness, the lines feeling not as if they were being fed into iambic pentameter because of some strategic decision but because the meter was a natural mode for its speaker. The subtle sewing through of the fittingly wavelike W-words in the first half (“wandered … wrecked … where … worked”) and the stormy S-words that knit together the second half, marrying the waves to the storm in which this man will suffer, made the terse injunctions to the muse that frame this prologue to the poem (“Tell me about …” and “Find the beginning”) seem as if they might actually answer the puzzle posed by Homer’s polytropos and Odysseus’s complicated nature.

Complicated: the brilliance of Wilson’s choice is, in part, its seeming straightforwardness. But no less than that of polytropos, the etymology of “complicated” is revealing. From the Latin verb complicare, it means “to fold together.” No, we don’t think of that root when we call someone complicated, but it’s what we mean: that they’re compound, several things folded into one, difficult to unravel, pull apart, understand.

“It feels,” I told Wilson, “with your choice of ‘complicated,’ that you planted a flag.”

“It is a flag,” she said.

“It says, ‘Guess what?’ — ”

“ ‘ — this is different.’ ”

“The First Woman to Translate the ‘Odyssey’ Into English,” by Wyatt Mason

curliestofcrowns

@inaneenglish

inaneenglish

The farther I get in Wilson’s translation, the less I reach for my Fitzgerald copy to compare. Hers is so wonderful in its clarity, and proof that language doesn’t have to be highly elevated to be beautiful. The Odyssey is a much more intimate poem than the Iliad, and omg Wilson digs into that so well.

feuervogel

Can I borrow your copy over the summer?? I want to read it so much.

marithlizard

Hm, I’ve never actually read the whole Odyssey, and this translation is making the idea seem appealing!

mizkit

I really want to do a group read of Wilson’s Odyssey but I don’t want to be the one who organizes it. :)

taraljc

SAME! Translating the Aeneid in HS put me off epic poems, but I’d LOVE to read Wilson’s Odyssey.

gacorley

@acta-lingweenie thoughts?

acta-lingweenie

She’s doing the Muses’ Work. Old High Translationese is a blight.

Source: The New York Times
dedalvs

If you want to catch a glimpse of the largest, most detailed, oldest artlang you’ve never heard of, take a look at this.

aiweirdness

Ancient wisdom from the neural network

lewisandquark

What happens when really old advice meets really new technology?

A recurrent neural network (like the open-source char-rnn framework used here) can teach itself to imitate recipes, paint colors, band names, and even guinea pig names. By examining a dataset, it learns to formulate its own rules about it, and can use these rules to generate new text that - according to the neural network - resembles the dataset. But since the neural network is doing all this without cultural context, or any knowledge of what the words really mean, the results are often a bit bizarre.
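(char-rnn itself is written in Lua/Torch, but the core trick - predict the next character, one character at a time - fits in a short PyTorch sketch. Everything below is illustrative: the file name and hyperparameters are placeholders, not the actual settings behind these posts.)

```python
# Minimal character-level language model in the spirit of char-rnn.
# Illustrative sketch only; "proverbs.txt" is a hypothetical input file.
import torch
import torch.nn as nn

text = open("proverbs.txt").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=512, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=2e-3)
loss_fn = nn.CrossEntropyLoss()
data = torch.tensor([stoi[c] for c in text])

seq_len = 100
for step in range(1000):
    # Train on random crops: the target is the input shifted by one character.
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```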

In this example, the dataset is a list of more than 2000 ancient proverbs, collected by reader Anthony Mandelli. Some of these are well-known, such as “You can lead a horse to water, but you can’t make it drink.” and “Where there’s a will, there’s a way.” Others are frankly a bit strange: “Where there’s muck there’s brass.” and “A curst cow has short horns.” and “Be not a baker if your head is made of butter.”

What will a neural network make of this ancient wisdom?

If you answered “Really really weird proverbs”, you are correct.

A fox smells it better than a fool’s for a day.
No songer in a teacuper.
A fool in a teacup is a silent for a needle in the sale.
No man is the better pan on the hunder.
A mouse is a good bound to receive.
Do not come to the cow.

Some of them almost make sense:

A good wine makes the best sermon.
A good fear is never known till needed.
Death when it comes will have no sheep.
An ounce of the heart comes without an exception.
A good face is a letter to get out of the fire.
No wise man ever wishes to be sick.
A good excuse is as good as a rest.
There is no smoke without the best sin.
A good man is worth doing well.
A good anvil does not make the most noise.

While others would be more difficult to pass off as real proverbs:

We can serve no smort.
A good face is a letter like a dog.
A good earse makes a good ending.
Gnow will not go out.
Ung.
A fox smeep is the horse of the best sermon.
No sweet is half the barn door after the cat.
There is not fire and step on your dog and stains the best sermon.
An ox is a new dogn not sing in a haystar.

One of the oddest things to emerge from the proverb-trained neural network is a strange obsession with oxen. I checked, and there were only three oxen-related proverbs in the dataset, yet they appear frequently in the neural network’s version, and usually as rather powerful creatures.

An ox can lever an enemies are dangerous and restens at home.
An ox is not to be given with a single stone.
An ox is never known till needed.
An ox is as good as a best.
An ox is not to be that wound is hot.
An ox is a silent for the gain of the bush.
An ox is not fill when he will eat forever.

Whatever internal mythos the neural network has learned from these ancient proverbs, oxen are mysteriously important.

aiweirdness

Paint colors designed by neural network, Part 2

lewisandquark

[image]

So it turns out you can train a neural network to generate paint colors if you give it a list of 7,700 Sherwin-Williams paint colors as input. A neural network basically works by looking at a set of data - in this case, a long list of Sherwin-Williams paint color names and the RGB (red, green, blue) numbers that represent each color - and trying to form its own rules about how to generate more data like it.
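The post doesn’t show the exact file format, but for a character-level network the input ultimately has to be plain text; one plausible layout (the two sample rows below are my own, with approximate Sherwin-Williams values) would be:

```python
# One plausible way to lay out "name + RGB" rows as plain text for a
# character-level network. The actual format used in the post isn't shown.
colors = [
    ("Agreeable Gray", 209, 203, 193),   # sample rows; values approximate
    ("Sea Salt", 205, 210, 202),
]
with open("colors.txt", "w") as f:
    for name, r, g, b in colors:
        f.write(f"{name},{r},{g},{b}\n")
```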

Last time I reported results that were, well… mixed. The neural network produced colors, all right, but it hadn’t gotten the hang of producing appealing names to go with them - instead producing names like Rose Hork, Stanky Bean, and Turdly. It also had trouble matching names to colors, and would often produce an “Ice Gray” that was a mustard yellow, for example, or a “Ferry Purple” that was decidedly brown.  

These were not great names.

[image]

There are lots of things that affect how well the algorithm does, however.

One simple change turns out to be the “temperature” (think: creativity) variable, which adjusts whether the neural network always picks the most likely next character as it’s generating text, or whether it will go with something farther down the list. I had the temperature originally set pretty high, but it turns out that when I turn it down ever so slightly, the algorithm does a lot better. Not only do the names better match the colors, but it begins to reproduce color gradients that must have been in the original dataset all along. Colors tend to be grouped together in these gradients, so it shifts gradually from greens to browns to blues to yellows, etc. and does eventually cover the rainbow, not just beige.
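As a sketch of what that knob actually does (my illustration, not the post’s code): before sampling, the network’s raw scores get divided by the temperature, so low values sharpen the distribution toward the single most likely character, and high values flatten it toward weirder picks.

```python
import numpy as np

def sample_next_char(logits, temperature=1.0):
    # Scale the raw scores: low temperature sharpens the distribution
    # (near-greedy picks), high temperature flattens it (weirder picks).
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()              # for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)
```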

Apparently it was trying to give me better results, but I kept screwing it up.

Raw output from RGB neural net, now less-annoyed by my temperature setting

[image]

People also sent in suggestions on how to improve the algorithm. One of the most frequent was to try a different way of representing color - it turns out that RGB (with a single color represented by the amount of Red, Green, and Blue in it) isn’t very well matched to the way human eyes perceive color.

These are some results from a different color representation, known as HSV. In HSV representation, a single color is represented by three numbers like in RGB, but this time they stand for Hue, Saturation, and Value. You can think of the Hue number as representing the color, Saturation as representing how intense (vs gray) the color is, and Value as representing the brightness. Other than the way of representing the color, everything else about the dataset and the neural network is the same. (char-rnn, 512 neurons and 2 layers, dropout 0.8, 50 epochs)
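If you want to try that conversion yourself, Python’s standard library already handles RGB-to-HSV; this snippet is my own illustration rather than part of the original experiment.

```python
import colorsys

def rgb255_to_hsv(r, g, b):
    # colorsys expects floats in [0, 1], so scale 0-255 values first;
    # it returns hue, saturation, value, each also in [0, 1].
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

print(rgb255_to_hsv(209, 203, 193))   # a warm light gray
```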

Raw output from HSV neural net:

[image]

And here are some results from a third color representation, known as LAB. In this color space, the first number stands for lightness, the second number stands for the amount of green vs red, and the third number stands for the amount of blue vs yellow.
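LAB (strictly, CIELAB) takes more arithmetic, and the standard library has no converter, so here’s a from-scratch sRGB-to-LAB sketch using the usual D65 white point. (Again, my own illustration - the post doesn’t say which tool did its conversion.)

```python
def srgb_to_lab(r, g, b):
    # 1. Undo the sRGB gamma curve to get linear RGB in [0, 1].
    def lin(c):
        c /= 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # 2. Linear RGB -> CIE XYZ.
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # 3. XYZ -> LAB, relative to the D65 reference white.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return (116 * fy - 16,        # L: lightness
            500 * (fx - fy),      # a: green (-) to red (+)
            200 * (fy - fz))      # b: blue (-) to yellow (+)
```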

Raw output from LAB neural net:

[image]

It turns out that the color representation doesn’t make a very big difference in how good the results are (at least as far as I can tell with my very simple experiment). Surprisingly, RGB seems to be the best at reproducing the gradients from the original dataset - maybe it’s more resistant to disruption when the temperature setting introduces randomness.

And the color names are pretty bad, no matter how the colors themselves are represented.

However, a blog reader compiled this dataset, which has paint colors from other companies such as Behr and Benjamin Moore, as well as a bunch of user-submitted colors from a big XKCD survey. He also changed all the names to lowercase, so the neural network wouldn’t have to learn two versions of each letter.
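That cleanup step is simple enough to sketch; the file names below are hypothetical stand-ins for the compiled dataset’s sources.

```python
# Merge several color lists into one file and lowercase every name,
# so the network only has to learn one version of each letter.
sources = ["sherwin_williams.csv", "behr.csv", "benjamin_moore.csv", "xkcd.csv"]
with open("big_rgb_dataset.txt", "w") as out:
    for path in sources:
        with open(path) as f:
            for line in f:
                # rsplit from the right so commas inside names survive
                name, r, g, b = line.strip().rsplit(",", 3)
                out.write(f"{name.lower()},{r},{g},{b}\n")
```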

And the results were… surprisingly good. Pretty much every name was a plausible match to its color (even if it wasn’t a plausible color you’d find in the paint store). The answer seems to be, as it often is for neural networks: more data.

Raw output using The Big RGB Dataset:

[image]

I leave you with the Hall of Fame:

RGB:

[image]

HSV:

[image]

LAB:

[image]

Big RGB dataset:

[image]