Firen Word Generator
Noteworthy nodes in each datafile include:
|Root nodes (Click a root to generate from it)
|More information about Firen can be found on the Wiki.
|Sajem Tan is a collaborative conlang. It has a website here.
|My (possibly poorly-considered) attempt to encode basic English grammar in WordGen. I apologise in advance to anyone who tries to make sense out of it.
|Dab vi Suxi Kidap
|DVSK is a very simple isolating language that was created as a collaboration between me and four other people from the Sajem Tan tribe; however, it was abandoned after working out the foundations.
|Another collaborative language in the Sajem Tan universe. It is the source of triconsonantal roots in Sajem Tan.
|A musical language used in the same setting as Firen. It is currently much less well-developed.
|Someone on Mastodon posted a silly CFG for making gender jokes, so I encoded it as a WordGen datafile. Nothing more to it.
|This is one of the first files I ever wrote, and it shows. It uses outdated and deprecated WordGen features, and it made the very questionable choice of using 'val' for a phonetic English reading of the number and 'ipa' for the digits.
|This file exists as a testing ground for things that are too simple to need their own files, and for new or experimental features. You will need to increase the recursion depth to use some of these roots, particularly Node, or else you will get a million errors.
Note that CFGs.yml is not allowed on this web interface due to its higher resource use than the other files and its reliance on WordGen/Cpp features.
This is the web frontend for a Python program that will produce random words using a (rather nifty) weighted-randomized macro expansion approach. IPA transcriptions are generated from the same file, and are not directly attached to the orthography. This means that "digraph recognition" is not even a concept to worry about.
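The core idea can be sketched roughly like this. Note that the grammar, node names, and weights below are invented for illustration; they are not taken from the real Firen datafile or from WordGen's actual file format.

```python
import random

# Hypothetical grammar: each node maps to weighted alternatives, where an
# alternative is a list of parts (either other node names or literal strings).
GRAMMAR = {
    "word":     [(3, ["syllable"]),
                 (5, ["syllable", "syllable"]),
                 (2, ["syllable", "syllable", "syllable"])],
    "syllable": [(4, ["onset", "vowel"]), (1, ["vowel"])],
    "onset":    [(3, ["t"]), (3, ["k"]), (2, ["s"]), (1, ["fr"])],
    "vowel":    [(4, ["a"]), (3, ["i"]), (2, ["e"]), (1, ["o"])],
}

def expand(node, rng=random):
    """Recursively expand a node by a weighted random choice of one rule."""
    if node not in GRAMMAR:          # terminal: emit the literal string
        return node
    rules = GRAMMAR[node]
    weights = [w for w, _ in rules]
    _, expansion = rng.choices(rules, weights=weights)[0]
    return "".join(expand(part, rng) for part in expansion)

print(expand("word"))
```

An IPA string can be generated the same way from a parallel set of expansions, which is why digraph recognition never comes up: the spelling and the transcription are produced side by side rather than one being parsed out of the other.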
In a second phase, regular expressions and Mealy-type finite state machines are applied to transform the output.
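A minimal sketch of that second phase, assuming invented rewrite rules (these are not the actual Firen transforms): a regular-expression pass followed by a tiny Mealy machine, whose output depends on both the current state and the input symbol.

```python
import re

def regex_phase(word):
    # Invented rule: collapse two identical adjacent vowels into one.
    return re.sub(r"([aeiou])\1", r"\1", word)

# Mealy machine: (state, input class) -> (next state, emitted prefix).
# States track whether the previous letter was a vowel; the invented rule
# inserts an apostrophe between two different vowels in hiatus.
MEALY = {
    ("C", "vowel"): ("V", ""),
    ("C", "cons"):  ("C", ""),
    ("V", "vowel"): ("V", "'"),
    ("V", "cons"):  ("C", ""),
}

def mealy_phase(word):
    state, out = "C", []
    for ch in word:
        kind = "vowel" if ch in "aeiou" else "cons"
        state, emit = MEALY[(state, kind)]
        out.append(emit + ch)
    return "".join(out)

print(mealy_phase(regex_phase("taaoki")))  # -> ta'oki
```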
The Firen datafile is quite well-developed and generally produces good results. The IPA transcriptions are sometimes non-obvious because they include synchronic sound changes, and occasionally unnatural but still correct, as with the overzealous syllabification.
The other datafiles are in various stages of development.
Not that it matters or anything, but unless you provide your own seeds, this web frontend has worse randomness because it simply uses Unix time as the seed. (The server has to generate the seed for the permalink to work, and time is the standard easy choice for these things.) When run from the command line without an explicit seed parameter, the randomness is much better, since Python seeds its random generator from the system's main entropy source. Maybe I could make this Base64-encode some bytes from /dev/urandom or something for the seed instead; it wouldn't change too much.
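The urandom idea mentioned above could look something like this (a sketch, not code from the actual frontend; the byte count is an arbitrary choice):

```python
import base64
import os
import random

def make_seed(nbytes=9):
    """Read a few bytes from the OS entropy source and Base64-encode them
    with the URL-safe alphabet, so the seed can live in a permalink."""
    return base64.urlsafe_b64encode(os.urandom(nbytes)).decode("ascii")

seed = make_seed()          # 9 bytes -> 12 URL-safe characters, no padding
random.seed(seed)           # Python hashes the string into generator state
print(seed)
```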
Working-1.py is a less flexible earlier (Python 2 only) draft, which technically knows nothing about words and only generates syllables. You may find it interesting or even useful. The data file for that version is also available. The two versions are not compatible, but are mostly similar, and a single file could in theory be agnostic between the two.
Once this is "done", my next plan is to implement something with Markov chains, the more classical way to generate natural language.
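For comparison with the macro-expansion approach, that classical technique can be sketched as an order-1 character Markov model; the training words below are placeholders, not real Firen data.

```python
import random
from collections import defaultdict

def train(words):
    """Record, for each character, which characters can follow it.
    '^' marks the start of a word and '$' marks the end."""
    table = defaultdict(list)
    for w in words:
        for a, b in zip("^" + w, w + "$"):
            table[a].append(b)
    return table

def generate(table, rng=random, limit=12):
    """Walk the transition table from '^' until '$' or the length limit."""
    out, ch = [], "^"
    while len(out) < limit:
        ch = rng.choice(table[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

table = train(["firen", "sajem", "tan", "kidap"])
print(generate(table))
```

Unlike the grammar-driven generator, a Markov model only knows local letter adjacencies it saw in training, so it needs a word list rather than hand-written rules.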