# Contextual word checker for better suggestions
> It is essential to understand that identifying whether a candidate is a spelling error is a big task.
> Spelling errors are broadly classified as non-word errors (NWE) and real word errors (RWE). If the misspelt string is a valid word in the language, then it is called an RWE, else it is an NWE.
>
> -- Monojit Choudhury et al. (2007)
This package currently focuses on out-of-vocabulary (OOV) word, or non-word error (NWE), correction using the BERT model. The idea behind using BERT is to use the surrounding context when correcting OOV words. To improve this package, I would like to extend the functionality to identify RWE, optimise the package, and improve the documentation.
The package can be installed using pip. It requires Python 3.6+:

```bash
pip install contextualSpellCheck
```
Note: For use in other languages, check the examples folder.
```python
>>> import contextualSpellCheck
>>> import spacy
>>> nlp = spacy.load("en_core_web_sm")
>>>
>>> ## We require NER to identify if a token is a PERSON
>>> ## also require parser because we use `Token.sent` for context
>>> nlp.pipe_names
['tok2vec', 'tagger', 'parser', 'ner', 'attribute_ruler', 'lemmatizer']
>>> contextualSpellCheck.add_to_pipe(nlp)
>>> nlp.pipe_names
['tok2vec', 'tagger', 'parser', 'ner', 'attribute_ruler', 'lemmatizer', 'contextual spellchecker']
>>>
>>> doc = nlp('Income was $9.4 milion compared to the prior year of $2.7 milion.')
>>> doc._.outcome_spellCheck
'Income was $9.4 million compared to the prior year of $2.7 million.'
```
Or you can add it to the spaCy pipeline manually!
```python
>>> import spacy
>>> import contextualSpellCheck
>>>
>>> nlp = spacy.load("en_core_web_sm")
>>> nlp.pipe_names
['tok2vec', 'tagger', 'parser', 'ner', 'attribute_ruler', 'lemmatizer']
>>> # You can pass optional parameters to the contextualSpellCheck,
>>> # e.g. to set the maximum edit distance, use config={"max_edit_dist": 3}
>>> nlp.add_pipe("contextual spellchecker")
<contextualSpellCheck.contextualSpellCheck.ContextualSpellCheck object at 0x1049f82b0>
>>> nlp.pipe_names
['tok2vec', 'tagger', 'parser', 'ner', 'attribute_ruler', 'lemmatizer', 'contextual spellchecker']
>>>
>>> doc = nlp("Income was $9.4 milion compared to the prior year of $2.7 milion.")
>>> print(doc._.performed_spellCheck)
True
>>> print(doc._.outcome_spellCheck)
Income was $9.4 million compared to the prior year of $2.7 million.
```
After adding the contextual spellchecker to the pipeline, you use the pipeline normally. The spell-check suggestions and other data can be accessed using extensions.
```python
>>> doc = nlp(u'Income was $9.4 milion compared to the prior year of $2.7 milion.')
>>>
>>> # Doc Extension
>>> print(doc._.contextual_spellCheck)
True
>>> print(doc._.performed_spellCheck)
True
>>> print(doc._.suggestions_spellCheck)
{milion: 'million', milion: 'million'}
>>> print(doc._.outcome_spellCheck)
Income was $9.4 million compared to the prior year of $2.7 million.
>>> print(doc._.score_spellCheck)
{milion: [('million', 0.59422), ('billion', 0.24349), (',', 0.08809), ('trillion', 0.01835), ('Million', 0.00826), ('%', 0.00672), ('##M', 0.00591), ('annually', 0.0038), ('##B', 0.00205), ('USD', 0.00113)], milion: [('billion', 0.65934), ('million', 0.26185), ('trillion', 0.05391), ('##M', 0.0051), ('Million', 0.00425), ('##B', 0.00268), ('USD', 0.00153), ('##b', 0.00077), ('millions', 0.00059), ('%', 0.00041)]}
>>>
>>> # Token Extension
>>> print(doc[4]._.get_require_spellCheck)
True
>>> print(doc[4]._.get_suggestion_spellCheck)
'million'
>>> print(doc[4]._.score_spellCheck)
[('million', 0.59422), ('billion', 0.24349), (',', 0.08809), ('trillion', 0.01835), ('Million', 0.00826), ('%', 0.00672), ('##M', 0.00591), ('annually', 0.0038), ('##B', 0.00205), ('USD', 0.00113)]
>>>
>>> # Span Extension
>>> print(doc[2:6]._.get_has_spellCheck)
True
>>> print(doc[2:6]._.score_spellCheck)
{$: [], 9.4: [], milion: [('million', 0.59422), ('billion', 0.24349), (',', 0.08809), ('trillion', 0.01835), ('Million', 0.00826), ('%', 0.00672), ('##M', 0.00591), ('annually', 0.0038), ('##B', 0.00205), ('USD', 0.00113)], compared: []}
```
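Each entry in `score_spellCheck` maps a token to a list of `(suggestion, probability)` pairs. A minimal stdlib sketch of picking the highest-probability suggestion from such a list (the sample scores are copied from the output above; `best_suggestion` is an illustrative helper, not part of the package):

```python
def best_suggestion(scores):
    """Return the (word, probability) pair with the highest probability,
    or None when the token has no suggestions (e.g. tokens like '$')."""
    return max(scores, key=lambda pair: pair[1]) if scores else None

# Scores as returned for the first "milion" token in the example above
milion_scores = [
    ("million", 0.59422), ("billion", 0.24349), (",", 0.08809),
    ("trillion", 0.01835), ("Million", 0.00826),
]

print(best_suggestion(milion_scores))  # ('million', 0.59422)
print(best_suggestion([]))             # None
```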
To make usage easy, the contextual spellchecker provides custom spaCy extensions which your code can consume, so it is easier to get the desired data. contextualSpellCheck provides extensions at the doc, span, and token level. The tables below summarise the extensions.
### `spaCy.Doc` level extensions

| Extension | Type | Description | Default |
|---|---|---|---|
| `doc._.contextual_spellCheck` | `Boolean` | To check whether contextualSpellCheck is added as an extension | `True` |
| `doc._.performed_spellCheck` | `Boolean` | To check whether contextualSpellCheck identified any misspellings and performed correction | `False` |
| `doc._.suggestions_spellCheck` | `{spaCy.Token: str}` | If corrections are performed, returns the mapping of misspelled tokens (`spaCy.Token`) to suggested words (`str`) | `{}` |
| `doc._.outcome_spellCheck` | `str` | Corrected sentence (`str`) as output | `""` |
| `doc._.score_spellCheck` | `{spaCy.Token: List(str, float)}` | If corrections are identified, returns the mapping of misspelled tokens (`spaCy.Token`) to suggested words (`str`) and the probability of each correction | `None` |
### `spaCy.Span` level extensions

| Extension | Type | Description | Default |
|---|---|---|---|
| `span._.get_has_spellCheck` | `Boolean` | To check whether contextualSpellCheck identified any misspellings and performed correction in this span | `False` |
| `span._.score_spellCheck` | `{spaCy.Token: List(str, float)}` | If corrections are identified, returns the mapping of misspelled tokens (`spaCy.Token`) to suggested words (`str`) and the probability of each correction, for tokens in this span | `{spaCy.Token: []}` |
### `spaCy.Token` level extensions

| Extension | Type | Description | Default |
|---|---|---|---|
| `token._.get_require_spellCheck` | `Boolean` | To check whether contextualSpellCheck identified any misspelling and performed correction on this token | `False` |
| `token._.get_suggestion_spellCheck` | `str` | If a correction is performed, returns the suggested word (`str`) | `""` |
| `token._.score_spellCheck` | `[(str, float)]` | If corrections are identified, returns suggested words (`str`) and the probability (`float`) of each correction | `[]` |
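As an illustration of how `outcome_spellCheck` relates to `suggestions_spellCheck`, here is a hedged stdlib sketch: the real pipeline keys the mapping by `spaCy.Token` rather than plain strings, and `apply_suggestions` is a hypothetical helper, not part of the package:

```python
def apply_suggestions(text, suggestions):
    """Replace each misspelled word with its suggested correction.

    `suggestions` maps misspelled strings to corrected strings; the
    package itself keys this mapping by spaCy.Token instead of str.
    """
    for wrong, right in suggestions.items():
        text = text.replace(wrong, right)
    return text

sentence = "Income was $9.4 milion compared to the prior year of $2.7 milion."
print(apply_suggestions(sentence, {"milion": "million"}))
# Income was $9.4 million compared to the prior year of $2.7 million.
```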
At present, there is a simple GET API to get you started. You can run the app locally and play with it.

Query: you can use the endpoint `http://127.0.0.1:5000/?query=YOUR-QUERY`. Note: your browser can handle the text encoding.

GET request: `http://localhost:5000/?query=Income%20was%20$9.4%20milion%20compared%20to%20the%20prior%20year%20of%20$2.7%20milion.`
Response:

```json
{
    "success": true,
    "input": "Income was $9.4 milion compared to the prior year of $2.7 milion.",
    "corrected": "Income was $9.4 million compared to the prior year of $2.7 million.",
    "suggestion_score": {
        "milion": [
            [
                "million",
                0.59422
            ],
            [
                "billion",
                0.24349
            ],
            ...
        ],
        "milion:1": [
            [
                "billion",
                0.65934
            ],
            [
                "million",
                0.26185
            ],
            ...
        ]
    }
}
```
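Building the query URL by hand gets awkward once the text contains spaces and punctuation; a small stdlib sketch of percent-encoding the query (the host and port are the defaults shown above, and `build_query_url` is an illustrative helper, not part of the package):

```python
from urllib.parse import quote

def build_query_url(text, base="http://127.0.0.1:5000/"):
    """Percent-encode the text and append it as the `query` parameter."""
    return f"{base}?query={quote(text)}"

url = build_query_url("Income was $9.4 milion compared to the prior year of $2.7 milion.")
print(url)
```

Note that `quote` also encodes characters such as `$`, which browsers typically leave literal; the server treats both forms identically.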
If you like the project, please ⭑ the project and show your support! Also, if you feel the current behaviour is not as expected, please feel free to raise an issue. If you can help with any of the above tasks, please open a PR with the necessary changes to documentation and tests.
If you are using contextualSpellCheck in your academic work, please consider citing the library using the below BibTeX entry:

```bibtex
@misc{Goel_Contextual_Spell_Check_2021,
  author = {Goel, Rajat},
  doi = {10.5281/zenodo.4642379},
  month = {3},
  title = {{Contextual Spell Check}},
  url = {https://github.com/R1j1t/contextualSpellCheck},
  year = {2021}
}
```
Below are some of the projects/works I referred to while developing this package.