A library to interface with the OpenNLP (Open Natural Language Processing) library of functions. Not all functionality is implemented yet.
Additional information/documentation:
Read the Marginalia documentation
[clojure-opennlp "0.5.0"] ;; uses OpenNLP 1.9.0
clojure-opennlp works with Clojure 1.5+.
(use 'clojure.pprint) ; just for this documentation
(use 'opennlp.nlp)
(use 'opennlp.treebank) ; treebank chunking, parsing and linking lives here
You will need to use the model files to create the processing functions. These paths assume you are running from the root project directory. You can also download the model files from http://opennlp.sourceforge.net/models-1.5.
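As a sketch, the sentence-detector model used below could be fetched into a local models/ directory like this (the exact file listing on the models-1.5 page is an assumption; check the page for the models you actually need):

```shell
# Sketch: fetch the English sentence-detector model into models/.
# The filename en-sent.bin matches the paths used below; adjust for
# whichever models you need. The models/ directory is created even
# if the download itself fails.
mkdir -p models
curl -fL -o models/en-sent.bin \
  http://opennlp.sourceforge.net/models-1.5/en-sent.bin \
  || echo "download failed; fetch the model manually"
```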
(def get-sentences (make-sentence-detector "models/en-sent.bin"))
(def tokenize (make-tokenizer "models/en-token.bin"))
(def detokenize (make-detokenizer "models/english-detokenizer.xml"))
(def pos-tag (make-pos-tagger "models/en-pos-maxent.bin"))
(def name-find (make-name-finder "models/namefind/en-ner-person.bin"))
(def chunker (make-treebank-chunker "models/en-chunker.bin"))
The tool creators are multimethods, so you can also create any of the tools using a model instead of a filename (you can create a model with the training tools in src/opennlp/tools/train.clj):
(def tokenize (make-tokenizer my-tokenizer-model)) ;; etc, etc
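As a sketch of that workflow, assuming the training namespace exposes a train-tokenizer function (check src/opennlp/tools/train.clj for the exact names and the expected training-data format; both are assumptions here):

```clojure
;; ASSUMPTION: train-tokenizer and its argument format come from
;; src/opennlp/tools/train.clj; verify the exact API there.
(use 'opennlp.tools.train)

;; train a tokenizer model from annotated training data, then hand
;; the model object (not a filename) to the multimethod
(def my-tokenizer-model (train-tokenizer "training/en-token.train"))
(def tokenize (make-tokenizer my-tokenizer-model))
```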
Then, use the functions you've created to perform operations on text:
Detecting sentences:
(pprint (get-sentences "First sentence. Second sentence? Here is another one. And so on and so forth - you get the idea..."))
["First sentence. ", "Second sentence? ", "Here is another one. ",
 "And so on and so forth - you get the idea..."]
Tokenizing:
(pprint (tokenize "Mr. Smith gave a car to his son on Friday"))
["Mr.", "Smith", "gave", "a", "car", "to", "his", "son", "on",
 "Friday"]
Detokenizing:
(detokenize ["Mr.", "Smith", "gave", "a", "car", "to", "his", "son", "on", "Friday"])
"Mr. Smith gave a car to his son on Friday."
Ideally, s == (detokenize (tokenize s)); the detokenization model XML file is a work in progress, so please let me know if you run into anything that doesn't detokenize correctly in English.
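That round-trip property can be checked directly with the functions defined above (the sentence here is just an illustration):

```clojure
;; returns true when detokenization exactly inverts tokenization for
;; this sentence; a mismatch points at a gap in the detokenizer model
(let [s "Mr. Smith gave a car to his son on Friday."]
  (= s (detokenize (tokenize s))))
```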
Part-of-speech tagging:
(pprint (pos-tag (tokenize "Mr. Smith gave a car to his son on Friday.")))
(["Mr." "NNP"]
 ["Smith" "NNP"]
 ["gave" "VBD"]
 ["a" "DT"]
 ["car" "NN"]
 ["to" "TO"]
 ["his" "PRP$"]
 ["son" "NN"]
 ["on" "IN"]
 ["Friday." "NNP"])
Name finding:
(name-find (tokenize "My name is Lee, not John."))
("Lee" "John")
Treebank chunking splits and tags phrases from a pos-tagged sentence. A notable difference is that it returns a list of structs with :phrase and :tag keys, as seen below:
(pprint (chunker (pos-tag (tokenize "The override system is meant to deactivate the accelerator when the brake pedal is pressed."))))
({:phrase ["The" "override" "system"], :tag "NP"}
 {:phrase ["is" "meant" "to" "deactivate"], :tag "VP"}
 {:phrase ["the" "accelerator"], :tag "NP"}
 {:phrase ["when"], :tag "ADVP"}
 {:phrase ["the" "brake" "pedal"], :tag "NP"}
 {:phrase ["is" "pressed"], :tag "VP"})
For just the phrases:
(phrases (chunker (pos-tag (tokenize "The override system is meant to deactivate the accelerator when the brake pedal is pressed."))))
(["The" "override" "system"] ["is" "meant" "to" "deactivate"] ["the" "accelerator"] ["when"] ["the" "brake" "pedal"] ["is" "pressed"])
Or with just strings:
(phrase-strings (chunker (pos-tag (tokenize "The override system is meant to deactivate the accelerator when the brake pedal is pressed."))))
("The override system" "is meant to deactivate" "the accelerator" "when" "the brake pedal" "is pressed")
Document categorization:
See opennlp.test.tools.train for better usage examples.
(def doccat (make-document-categorizer "my-doccat-model"))
(doccat "This is some good text")
"Happy"
Where applicable, the probabilities OpenNLP supplies for a given operation are available as metadata on the result:
(meta (get-sentences "This is a sentence. This is also one."))
{:probabilities (0.9999054310803004 0.9941126097177366)}
(meta (tokenize "This is a sentence."))
{:probabilities (1.0 1.0 1.0 0.9956236737394807 1.0)}
(meta (pos-tag ["This" "is" "a" "sentence" "."]))
{:probabilities (0.9649410482478001 0.9982592902509803 0.9967282012835504 0.9952498677248117 0.9862225658078769)}
(meta (chunker (pos-tag ["This" "is" "a" "sentence" "."])))
{:probabilities (0.9941248001899835 0.9878092935921453 0.9986106511439116 0.9972975733070356 0.9906377695586069)}
(meta (name-find ["My" "name" "is" "John"]))
{:probabilities (0.9996272005494383 0.999999997485361 0.9999948113868132 0.9982291838206192)}
You can rebind opennlp.nlp/*beam-size* (the default is 3) with:
(binding [*beam-size* 1]
  (def pos-tag (make-pos-tagger "models/en-pos-maxent.bin")))
You can rebind opennlp.treebank/*advance-percentage* (the default for the treebank-parser is 0.95) with:
(binding [*advance-percentage* 0.80]
  (def parser (make-treebank-parser "parser-model/en-parser-chunking.bin")))
Note: Treebank parsing is very memory intensive; make sure your JVM has a sufficient amount of memory available (using something like -Xmx512m), or you will run out of heap space when using the treebank parser.
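If you manage your project with Leiningen, one place to set this is :jvm-opts in project.clj (the project name, version, and dependency vector here are placeholders; only the :jvm-opts line matters for the memory note above):

```clojure
;; project.clj sketch: raise the JVM heap limit for treebank parsing
(defproject my-nlp-project "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [clojure-opennlp "0.5.0"]]
  :jvm-opts ["-Xmx512m"])
```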
Treebank parsing gets its own section due to how complex it is.
Note that none of the treebank-parser models are included in the git repo; you will have to download them separately from the OpenNLP project.
Creating it:
(def treebank-parser (make-treebank-parser "parser-model/en-parser-chunking.bin"))
To use the treebank-parser, pass a sequence of sentences with their tokens separated by whitespace (preferably using tokenize):
(treebank-parser ["This is a sentence ."])
["(TOP (S (NP (DT This)) (VP (VBZ is) (NP (DT a) (NN sentence))) (. .)))"]
To turn the treebank-parser string into something easier for Clojure to work with, use the (make-tree ...) function:
(make-tree (first (treebank-parser ["This is a sentence ."])))
{:chunk {:chunk ({:chunk {:chunk "This", :tag DT}, :tag NP} {:chunk ({:chunk "is", :tag VBZ} {:chunk ({:chunk "a", :tag DT} {:chunk "sentence", :tag NN}), :tag NP}), :tag VP} {:chunk ".", :tag .}), :tag S}, :tag TOP}
Here's the same data structure split into a more readable format:
{:tag TOP
 :chunk {:tag S
         :chunk ({:tag NP
                  :chunk {:tag DT
                          :chunk "This"}}
                 {:tag VP
                  :chunk ({:tag VBZ
                           :chunk "is"}
                          {:tag NP
                           :chunk ({:tag DT
                                    :chunk "a"}
                                   {:tag NN
                                    :chunk "sentence"})})}
                 {:tag .
                  :chunk "."})}}
Hopefully that makes it a bit clearer: it's a nested map. If anyone has any suggestions for a better way to represent this information, feel free to send me an email or a patch.
Treebank parsing is considered beta at this point.
(use 'opennlp.tools.filters)
(pprint (nouns (pos-tag (tokenize "Mr. Smith gave a car to his son on Friday."))))
(["Mr." "NNP"]
 ["Smith" "NNP"]
 ["car" "NN"]
 ["son" "NN"]
 ["Friday" "NNP"])
(pprint (verbs (pos-tag (tokenize "Mr. Smith gave a car to his son on Friday."))))
(["gave" "VBD"])
(use 'opennlp.tools.filters)
(pprint (noun-phrases (chunker (pos-tag (tokenize "The override system is meant to deactivate the accelerator when the brake pedal is pressed")))))
({:phrase ["The" "override" "system"], :tag "NP"}
 {:phrase ["the" "accelerator"], :tag "NP"}
 {:phrase ["the" "brake" "pedal"], :tag "NP"})
(pos-filter determiners #"^DT")
#'user/determiners
(doc determiners)
-------------------------
user/determiners
([elements__52__auto__])
Given a list of pos-tagged elements, return only the determiners in a list.
(pprint (determiners (pos-tag (tokenize "Mr. Smith gave a car to his son on Friday."))))
(["a" "DT"])
You can also create treebank-chunk filters using (chunk-filter ...):
(chunk-filter fragments #"^FRAG$")
(doc fragments)
-------------------------
opennlp.nlp/fragments
([elements__178__auto__])
Given a list of treebank-chunked elements, return only the fragments in a list.
There are some functions to help you be lazy about tagging a document; depending on the operation needed, use the corresponding function:
#'opennlp.tools.lazy/lazy-get-sentences
#'opennlp.tools.lazy/lazy-tokenize
#'opennlp.tools.lazy/lazy-tag
#'opennlp.tools.lazy/lazy-chunk
#'opennlp.tools.lazy/sentence-seq
Here's how to use them:
(use 'opennlp.nlp)
(use 'opennlp.treebank)
(use 'opennlp.tools.lazy)
(def get-sentences (make-sentence-detector "models/en-sent.bin"))
(def tokenize (make-tokenizer "models/en-token.bin"))
(def pos-tag (make-pos-tagger "models/en-pos-maxent.bin"))
(def chunker (make-treebank-chunker "models/en-chunker.bin"))
(lazy-get-sentences ["This body of text has three sentences. This is the first. This is the third." "This body has only two. Here's the last one."] get-sentences)
;; will lazily return:
(["This body of text has three sentences." "This is the first." "This is the third."] ["This body has only two." "Here's the last one."])
(lazy-tokenize ["This is a sentence." "This is another sentence." "This is the third."] tokenize)
;; will lazily return:
(["This" "is" "a" "sentence" "."] ["This" "is" "another" "sentence" "."] ["This" "is" "the" "third" "."])
(lazy-tag ["This is a sentence." "This is another sentence."] tokenize pos-tag)
;; will lazily return:
((["This" "DT"] ["is" "VBZ"] ["a" "DT"] ["sentence" "NN"] ["." "."]) (["This" "DT"] ["is" "VBZ"] ["another" "DT"] ["sentence" "NN"] ["." "."]))
(lazy-chunk ["This is a sentence." "This is another sentence."] tokenize pos-tag chunker)
;; will lazily return:
(({:phrase ["This"], :tag "NP"} {:phrase ["is"], :tag "VP"} {:phrase ["a" "sentence"], :tag "NP"}) ({:phrase ["This"], :tag "NP"} {:phrase ["is"], :tag "VP"} {:phrase ["another" "sentence"], :tag "NP"}))
Feel free to use the lazy functions, but I'm still not 100% settled on the layout, so they may change in the future. (Perhaps chaining them together so that instead of a seq of sentences it looks like (lazy-chunk (lazy-tag (lazy-tokenize (lazy-get-sentences ...)))).)
To generate a lazy sequence of sentences from a file, use opennlp.tools.lazy/sentence-seq:
(with-open [rdr (clojure.java.io/reader "/tmp/bigfile")]
  (let [sentences (sentence-seq rdr get-sentences)]
    ;; process your lazy seq of sentences however you desire
    (println "first 5 sentences:")
    (clojure.pprint/pprint (take 5 sentences))))
There is code that allows training models for each of the tools. Please see the training documentation.
Copyright (C) 2010 Matthew Lee Hinman
Distributed under the Eclipse Public License, the same as Clojure uses. See the file COPYING.