lefex: A Tool for LExical FEature eXtraction

This project contains Hadoop MapReduce jobs, based on UIMA, for the extraction of lexical features of words and texts. Currently, the following types of features can be extracted:

  1. CoNLL. Given a set of HTML documents in the CSV format url<TAB>s3-path<TAB>html-document, this job outputs the dependency-parsed documents in the CoNLL format. See the de.uhh.lt.lefex.CoNLL.HadoopMain class.
  2. ExtractTermFeatureScores. Given a corpus in plain text format, this job extracts word counts (word<TAB>count), feature counts (feature<TAB>count), and word-feature counts (word<TAB>feature<TAB>count) and saves them to CSV files. This job is used for feature extraction in the JoSimText project: the computation of a distributional thesaurus can take the output of this job as input. See the de.uhh.lt.lefex.ExtractTermFeatureScores.HadoopMain class and the example invocation after this list.
  3. ExtractLexicalSampleFeatureScores. Given a lexical sample dataset for word sense disambiguation in CSV format, this job extracts features of the target word in context and adds them as an extra column. Currently, the system supports three types of features of a target word: co-occurrences, dependency features, and trigrams. See the de.uhh.lt.lefex.ExtractLexicalSampleFeatures.HadoopMain class.
  4. SentenceSplitter. This job takes a plain text corpus as input and outputs a file with exactly one sentence per line. See the de.uhh.lt.lefex.SentenceSplitter.HadoopMain class.
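
For illustration, a job such as ExtractTermFeatureScores would typically be launched through the standard hadoop jar mechanism. The jar name, HDFS paths, and argument layout below are placeholder assumptions; the actual arguments expected by each HadoopMain class may differ.

    # sketch only: jar name and HDFS paths are placeholders
    hadoop jar lefex.jar de.uhh.lt.lefex.ExtractTermFeatureScores.HadoopMain \
        /user/hadoop/input/corpus-plain-text \
        /user/hadoop/output/term-feature-scores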

To build the project you may need to install a JoBimText jar file, which contains a custom (non-Mavenized) dependency-collapsing UIMA annotator, into your local Maven repository; the repository provides a script for this.
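
As a hedged sketch of what such an installation step typically looks like, a non-Mavenized jar is usually installed into the local Maven repository with mvn install:install-file. The jar file name and Maven coordinates below are placeholder assumptions and must be replaced with those of the actual JoBimText jar.

    # sketch only: the jar file name and coordinates are placeholders
    mvn install:install-file \
        -Dfile=jobimtext-collapsing-annotator.jar \
        -DgroupId=org.jobimtext \
        -DartifactId=collapsing-annotator \
        -Dversion=1.0 \
        -Dpackaging=jar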
