Merge pull request #202 from UberWriter/legal

Fix some legal/license related issues
somas95 2020-03-12 01:42:27 +01:00 committed by GitHub
commit e6e8655c2a
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
23 changed files with 522 additions and 2529 deletions

@@ -1,7 +1,7 @@
# UberwriterAutoCorrect
# The Uberwriter Auto Correct is an auto-correction
# mechanism that prevents common typos
# import presage
# CURRENTLY DISABLED
import os
import pickle


@@ -14,6 +14,24 @@ AUTHOR
COPYING
Copyright (C) 2010 Stuart Rackham. Free use of this software is
granted under the terms of the MIT License.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
"""
import os


@@ -0,0 +1,503 @@
### GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[This is the first released version of the Lesser GPL. It also counts
as the successor of the GNU Library Public License, version 2, hence
the version number 2.1.]
### Preamble
The licenses for most software are designed to take away your freedom
to share and change it. By contrast, the GNU General Public Licenses
are intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users.
This license, the Lesser General Public License, applies to some
specially designated software packages--typically libraries--of the
Free Software Foundation and other authors who decide to use it. You
can use it too, but we suggest you first think carefully about whether
this license or the ordinary General Public License is the better
strategy to use in any particular case, based on the explanations
below.
When we speak of free software, we are referring to freedom of use,
not price. Our General Public Licenses are designed to make sure that
you have the freedom to distribute copies of free software (and charge
for this service if you wish); that you receive source code or can get
it if you want it; that you can change the software and use pieces of
it in new free programs; and that you are informed that you can do
these things.
To protect your rights, we need to make restrictions that forbid
distributors to deny you these rights or to ask you to surrender these
rights. These restrictions translate to certain responsibilities for
you if you distribute copies of the library or if you modify it.
For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link other code with the library, you must provide
complete object files to the recipients, so that they can relink them
with the library after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.
We protect your rights with a two-step method: (1) we copyright the
library, and (2) we offer you this license, which gives you legal
permission to copy, distribute and/or modify the library.
To protect each distributor, we want to make it very clear that there
is no warranty for the free library. Also, if the library is modified
by someone else and passed on, the recipients should know that what
they have is not the original version, so that the original author's
reputation will not be affected by problems that might be introduced
by others.
Finally, software patents pose a constant threat to the existence of
any free program. We wish to make sure that a company cannot
effectively restrict the users of a free program by obtaining a
restrictive license from a patent holder. Therefore, we insist that
any patent license obtained for a version of the library must be
consistent with the full freedom of use specified in this license.
Most GNU software, including some libraries, is covered by the
ordinary GNU General Public License. This license, the GNU Lesser
General Public License, applies to certain designated libraries, and
is quite different from the ordinary General Public License. We use
this license for certain libraries in order to permit linking those
libraries into non-free programs.
When a program is linked with a library, whether statically or using a
shared library, the combination of the two is legally speaking a
combined work, a derivative of the original library. The ordinary
General Public License therefore permits such linking only if the
entire combination fits its criteria of freedom. The Lesser General
Public License permits more lax criteria for linking other code with
the library.
We call this license the "Lesser" General Public License because it
does Less to protect the user's freedom than the ordinary General
Public License. It also provides other free software developers Less
of an advantage over competing non-free programs. These disadvantages
are the reason we use the ordinary General Public License for many
libraries. However, the Lesser license provides advantages in certain
special circumstances.
For example, on rare occasions, there may be a special need to
encourage the widest possible use of a certain library, so that it
becomes a de-facto standard. To achieve this, non-free programs must
be allowed to use the library. A more frequent case is that a free
library does the same job as widely used non-free libraries. In this
case, there is little to gain by limiting the free library to free
software only, so we use the Lesser General Public License.
In other cases, permission to use a particular library in non-free
programs enables a greater number of people to use a large body of
free software. For example, permission to use the GNU C Library in
non-free programs enables many more people to use the whole GNU
operating system, as well as its variant, the GNU/Linux operating
system.
Although the Lesser General Public License is Less protective of the
users' freedom, it does ensure that the user of a program that is
linked with the Library has the freedom and the wherewithal to run
that program using a modified version of the Library.
The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.
### TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
**0.** This License Agreement applies to any software library or other
program which contains a notice placed by the copyright holder or
other authorized party saying it may be distributed under the terms of
this Lesser General Public License (also called "this License"). Each
licensee is addressed as "you".
A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.
The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)
"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control
compilation and installation of the library.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does and
what the program that uses the Library does.
**1.** You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a
fee.
**2.** You may modify your copy or copies of the Library or any
portion of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
- **a)** The modified work must itself be a software library.
- **b)** You must cause the files modified to carry prominent
notices stating that you changed the files and the date of
any change.
- **c)** You must cause the whole of the work to be licensed at no
charge to all third parties under the terms of this License.
- **d)** If a facility in the modified Library refers to a function
or a table of data to be supplied by an application program that
uses the facility, other than as an argument passed when the
facility is invoked, then you must make a good faith effort to
ensure that, in the event an application does not supply such
function or table, the facility still operates, and performs
whatever part of its purpose remains meaningful.
(For example, a function in a library to compute square roots has
a purpose that is entirely well-defined independent of
the application. Therefore, Subsection 2d requires that any
application-supplied function or table used by this function must
be optional: if the application does not supply it, the square
root function must still compute square roots.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.
In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
**3.** You may opt to apply the terms of the ordinary GNU General
Public License instead of this License to a given copy of the Library.
To do this, you must alter all the notices that refer to this License,
so that they refer to the ordinary GNU General Public License, version
2, instead of to this License. (If a newer version than version 2 of
the ordinary GNU General Public License has appeared, then you can
specify that version instead if you wish.) Do not make any other
change in these notices.
Once this change is made in a given copy, it is irreversible for that
copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
This option is useful when you wish to copy part of the code of the
Library into a program that is not a library.
**4.** You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy from
a designated place, then offering equivalent access to copy the source
code from the same place satisfies the requirement to distribute the
source code, even though third parties are not compelled to copy the
source along with the object code.
**5.** A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a work,
in isolation, is not a derivative work of the Library, and therefore
falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License. Section
6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data structure
layouts and accessors, and small macros and small inline functions
(ten lines or less in length), then the use of the object file is
unrestricted, regardless of whether it is legally a derivative work.
(Executables containing this object code plus portions of the Library
will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
**6.** As an exception to the Sections above, you may also combine or
link a "work that uses the Library" with the Library to produce a work
containing portions of the Library, and distribute that work under
terms of your choice, provided that the terms permit modification of
the work for the customer's own use and reverse engineering for
debugging such modifications.
You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:
- **a)** Accompany the work with the complete corresponding
machine-readable source code for the Library including whatever
changes were used in the work (which must be distributed under
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood that
the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)
- **b)** Use a suitable shared library mechanism for linking with
the Library. A suitable mechanism is one that (1) uses at run time
a copy of the library already present on the user's computer
system, rather than copying library functions into the executable,
and (2) will operate properly with a modified version of the
library, if the user installs one, as long as the modified version
is interface-compatible with the version that the work was
made with.
- **c)** Accompany the work with a written offer, valid for at least
three years, to give the same user the materials specified in
Subsection 6a, above, for a charge no more than the cost of
performing this distribution.
- **d)** If distribution of the work is made by offering access to
copy from a designated place, offer equivalent access to copy the
above specified materials from the same place.
- **e)** Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.
For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the materials to be distributed need not include anything that is
normally distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.
It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
**7.** You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:
- **a)** Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other
library facilities. This must be distributed under the terms of
the Sections above.
- **b)** Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
**8.** You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
**9.** You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.
**10.** Each time you redistribute the Library (or any work based on
the Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties with
this License.
**11.** If, as a consequence of a court judgment or allegation of
patent infringement or for any other reason (not limited to patent
issues), conditions are imposed on you (whether by court order,
agreement or otherwise) that contradict the conditions of this
License, they do not excuse you from the conditions of this License.
If you cannot distribute so as to satisfy simultaneously your
obligations under this License and any other pertinent obligations,
then as a consequence you may not distribute the Library at all. For
example, if a patent license would not permit royalty-free
redistribution of the Library by all those who receive copies directly
or indirectly through you, then the only way you could satisfy both it
and this License would be to refrain entirely from distribution of the
Library.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply, and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
**12.** If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
**13.** The Free Software Foundation may publish revised and/or new
versions of the Lesser General Public License from time to time. Such
new versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
**14.** If you wish to incorporate parts of the Library into other
free programs whose distribution conditions are incompatible with
these, write to the author to ask for permission. For software which
is copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
**NO WARRANTY**
**15.** BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
**16.** IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
### END OF TERMS AND CONDITIONS
### How to Apply These Terms to Your New Libraries
If you develop a new library, and you want it to be of the greatest
possible use to the public, we recommend making it free software that
everyone can redistribute and change. You can do so by permitting
redistribution under these terms (or, alternatively, under the terms
of the ordinary General Public License).
To apply these terms, attach the following notices to the library. It
is safest to attach them to the start of each source file to most
effectively convey the exclusion of warranty; and each file should
have at least the "copyright" line and a pointer to where the full
notice is found.
one line to give the library's name and an idea of what it does.
Copyright (C) year name of author
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper
mail.
You should also get your employer (if you work as a programmer) or
your school, if any, to sign a "copyright disclaimer" for the library,
if necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in
the library `Frob' (a library for tweaking knobs) written
by James Random Hacker.
signature of Ty Coon, 1 April 1990
Ty Coon, President of Vice
That's all there is to it!

@@ -1 +0,0 @@
Subproject commit 264ee48c1fe05ef2198697e88f34bae581654caa


@@ -1 +0,0 @@
0.1.3


@@ -1,34 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE

from . import predictor
from . import context_tracker


class Pressagio:

    def __init__(self, callback, config, dbconnection=None):
        self.config = config
        self.callback = callback
        self.predictor_registry = predictor.PredictorRegistry(
            self.config, dbconnection)
        self.context_tracker = context_tracker.ContextTracker(
            self.config, self.predictor_registry, callback)
        self.predictor_activator = predictor.PredictorActivator(
            self.config, self.predictor_registry, self.context_tracker)
        self.predictor_activator.combination_policy = "meritocracy"

    def predict(self):
        multiplier = 1
        predictions = self.predictor_activator.predict(multiplier)
        return [p.word for p in predictions]

    def close_database(self):
        self.predictor_registry.close_database()


@@ -1,37 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
"""
Base class for callbacks.
"""
from __future__ import absolute_import, unicode_literals


class Callback(object):
    """
    Base class for callbacks.
    """

    def __init__(self):
        self.stream = ""
        self.empty = ""

    def past_stream(self):
        return self.stream

    def future_stream(self):
        return self.empty

    def update(self, character):
        # A backspace drops the last character of the stream.
        if character == "\b" and len(self.stream) > 0:
            self.stream = self.stream[:-1]
        else:
            self.stream += character
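The removed `Callback` class is small enough to sanity-check in isolation. A minimal standalone sketch (stripped of the pressagio package context) of the stream-tracking behavior it implements:

```python
class Callback:
    """Tracks the text stream a predictor sees; a backspace undoes one character."""

    def __init__(self):
        self.stream = ""

    def update(self, character):
        # A backspace removes the last character; anything else is appended.
        if character == "\b" and len(self.stream) > 0:
            self.stream = self.stream[:-1]
        else:
            self.stream += character


cb = Callback()
for ch in "teh\b\b\bthe":
    cb.update(ch)
print(cb.stream)  # → the
```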


@@ -1,34 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
from __future__ import absolute_import, unicode_literals

import unicodedata

blankspaces = " \f\n\r\t\v…"
separators = "`~!@#$%^&*()_-+=\\|]}[{'\";:/?.>,<†„“।॥ו´י0123456789"


def first_word_character(string):
    for i, ch in enumerate(string):
        if is_word_character(ch):
            return i
    return -1


def last_word_character(string):
    result = first_word_character(string[::-1])
    if result == -1:
        return -1
    return len(string) - result - 1


def is_word_character(char):
    # check for letter category
    if unicodedata.category(char)[0] == "L":
        return True
    return False
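The word-boundary helpers in the removed `character` module depend only on the standard library's `unicodedata`, so their behavior is easy to verify standalone:

```python
import unicodedata


def is_word_character(char):
    # Unicode general category "L*" covers letters in any script.
    return unicodedata.category(char)[0] == "L"


def first_word_character(string):
    # Index of the first letter, or -1 if the string contains none.
    for i, ch in enumerate(string):
        if is_word_character(ch):
            return i
    return -1


def last_word_character(string):
    # Reuse first_word_character on the reversed string to scan from the end.
    result = first_word_character(string[::-1])
    if result == -1:
        return -1
    return len(string) - result - 1


print(first_word_character("  hello! "))  # → 2
print(last_word_character("  hello! "))   # → 6
```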


@@ -1,64 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
"""
Combiner classes to merge results from several predictors.
"""
from __future__ import absolute_import, unicode_literals

import abc

from . import predictor


class Combiner(object):
    """
    Base class for all combiners
    """
    __metaclass__ = abc.ABCMeta

    def __init__(self):
        pass

    def filter(self, prediction):
        seen_tokens = set()
        result = predictor.Prediction()
        for i, suggestion in enumerate(prediction):
            token = suggestion.word
            if token not in seen_tokens:
                for j in range(i + 1, len(prediction)):
                    if token == prediction[j].word:
                        # TODO: interpolate here?
                        suggestion.probability += prediction[j].probability
                        if suggestion.probability > \
                                predictor.MAX_PROBABILITY:
                            suggestion.probability = \
                                predictor.MAX_PROBABILITY
                seen_tokens.add(token)
                result.add_suggestion(suggestion)
        return result

    @abc.abstractmethod
    def combine(self):
        raise NotImplementedError("Method must be implemented")


class MeritocracyCombiner(Combiner):

    def __init__(self):
        pass

    def combine(self, predictions):
        result = predictor.Prediction()
        for prediction in predictions:
            for suggestion in prediction:
                result.add_suggestion(suggestion)
        return self.filter(result)
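The meritocracy policy implemented by the removed combiner (flatten every predictor's suggestions, then merge duplicate words by summing their probabilities, capped at a maximum) can be sketched standalone. `Suggestion` and the plain lists below are simplified stand-ins for pressagio's `Suggestion`/`Prediction` types:

```python
# Simplified stand-ins for pressagio's types (assumed shapes, not the real API).
MAX_PROBABILITY = 1.0


class Suggestion:
    def __init__(self, word, probability):
        self.word = word
        self.probability = probability


def combine_meritocracy(predictions):
    # Flatten all predictors' suggestions, then merge duplicates by
    # summing probabilities, capped at MAX_PROBABILITY.
    seen = {}
    result = []
    for prediction in predictions:
        for s in prediction:
            if s.word in seen:
                seen[s.word].probability = min(
                    seen[s.word].probability + s.probability, MAX_PROBABILITY)
            else:
                seen[s.word] = s
                result.append(s)
    return result


merged = combine_meritocracy([
    [Suggestion("the", 0.5), Suggestion("they", 0.2)],
    [Suggestion("the", 0.25)],
])
print([(s.word, s.probability) for s in merged])  # → [('the', 0.75), ('they', 0.2)]
```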


@ -1,177 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
"""
Class for context tracker.
"""
from __future__ import absolute_import, unicode_literals
import copy
import io
from . import character
from . import observer
from . import tokenizer
DEFAULT_SLIDING_WINDOW_SIZE = 80
class InvalidCallbackException(Exception): pass
class ContextChangeDetector(object):
def __init__(self, lowercase):
self.lowercase = lowercase
self.sliding_windows_size = DEFAULT_SLIDING_WINDOW_SIZE
self.sliding_window = ""
def update_sliding_window(self, string):
if len(string) <= self.sliding_windows_size:
self.sliding_window = string
else:
self.sliding_window = string[-self.sliding_windows_size:]
def context_change(self, past_stream):
# rename for clarity
prev_context = self.sliding_window
curr_context = past_stream
if len(prev_context) == 0:
if len(curr_context) == 0:
return False
else:
return True
ctx_idx = curr_context.rfind(prev_context)
if ctx_idx == -1:
return True
remainder = curr_context[ctx_idx + len(prev_context):]
idx = character.last_word_character(remainder)
if idx == -1:
if len(remainder) == 0:
return False
last_char = curr_context[ctx_idx + len(prev_context) - 1]
if character.is_word_character(last_char):
return False
else:
return True
if idx == len(remainder) - 1:
return False
return True
def change(self, past_stream):
# rename for clarity
prev_context = self.sliding_window
curr_context = past_stream
if len(prev_context) == 0:
return past_stream
ctx_idx = curr_context.rfind(prev_context)
if ctx_idx == -1:
return past_stream
result = curr_context[ctx_idx + len(prev_context):]
if (self.context_change(past_stream)):
sliding_window_stream = self.sliding_window
r_tok = tokenizer.ReverseTokenizer(sliding_window_stream)
r_tok.lowercase = self.lowercase
first_token = r_tok.next_token()
if not len(first_token) == 0:
result = first_token + result
return result
class ContextTracker(object): #observer.Observer
"""
Tracks the current context.
"""
def __init__(self, config, predictor_registry, callback):
#self.dispatcher = observer.Dispatcher(self)
self.config = config
self.lowercase = self.config.getboolean("ContextTracker", "lowercase_mode")
self.registry = predictor_registry
if callback:
self.callback = callback
else:
raise InvalidCallbackException
self.context_change_detector = ContextChangeDetector(self.lowercase)
self.registry.context_tracker = self
self.sliding_windows_size = DEFAULT_SLIDING_WINDOW_SIZE
def context_change(self):
return self.context_change_detector.context_change(self.past_stream())
def update_context(self):
change = self.context_change_detector.change(self.past_stream())
tok = tokenizer.ForwardTokenizer(change)
tok.lowercase = self.lowercase
change_tokens = []
while(tok.has_more_tokens()):
token = tok.next_token()
change_tokens.append(token)
if len(change_tokens) != 0:
# remove prefix (partially entered token or empty token)
change_tokens.pop()
for predictor in self.registry:
predictor.learn(change_tokens)
self.context_change_detector.update_sliding_window(self.past_stream())
def prefix(self):
return self.token(0)
def token(self, index):
past_string_stream = self.past_stream()
string_io = io.StringIO(past_string_stream)
tok = tokenizer.ReverseTokenizer(string_io)
tok.lowercase = self.lowercase
i = 0
while tok.has_more_tokens() and i <= index:
token = tok.next_token()
i += 1
if i <= index:
token = ""
return token
def extra_token_to_learn(self, index, change):
return self.token(index + len(change))
def future_stream(self):
return self.callback.future_stream()
def past_stream(self):
return self.callback.past_stream()
def is_completion_valid(self, completion):
prefix = self.prefix().lower()
if prefix in completion:
return True
return False
def __repr__(self):
return self.callback.past_stream + "<|>" + self.callback.future_stream \
+ "\n"
# def update(self, observable):
# self.dispatcher.dispatch(observable)
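The core of the sliding-window bookkeeping above can be sketched in isolation. This is a deliberately simplified illustration that keeps only the window update and the "does the remembered window still appear in the stream" check, omitting the word-boundary and tokenizer handling of the full class:

```python
def update_sliding_window(window_size, stream):
    # Keep only the most recent window_size characters of the stream
    # (the intent of ContextChangeDetector.update_sliding_window).
    return stream if len(stream) <= window_size else stream[-window_size:]

def context_change(window, past_stream):
    # Simplified: the context changed when the remembered window no
    # longer appears anywhere in the current past stream.
    if not window:
        return bool(past_stream)
    return past_stream.rfind(window) == -1

w = update_sliding_window(5, "hello world")
print(w)  # 'world'
print(context_change(w, "hello world"))  # False
print(context_change(w, "goodbye"))      # True
```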


@ -1,745 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2001-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://www.cidles.eu/ltll/poio>
# For license information, see LICENSE
"""
Classes to connect to databases.
"""
from __future__ import absolute_import, unicode_literals
import abc
import sqlite3
import time
import re
import regex
try:
import psycopg2
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)
except ImportError:
pass
re_escape_singlequote = re.compile("'")
def _sqlite3_regex(expr, item):
return bool(regex.search(expr, item))
class DatabaseConnector(object):
"""
Base class for all database connectors.
"""
__metaclass__ = abc.ABCMeta
def __init__(self, dbname, cardinality = 1):
"""
Constructor of the base class DatabaseConnector.
Parameters
----------
dbname : str
path to the database file or database name
cardinality : int
default cardinality for n-grams
"""
self.cardinality = cardinality
self.dbname = dbname
self.lowercase = False
self.normalize = False
def create_ngram_table(self, cardinality):
"""
Creates a table for n-grams of a given cardinality. The table name is
constructed from this parameter, for example for cardinality `2` there
will be a table `_2_gram` created.
Parameters
----------
cardinality : int
The cardinality to create a table for.
"""
query = "CREATE TABLE IF NOT EXISTS _{0}_gram (".format(cardinality)
unique = ""
for i in reversed(range(cardinality)):
if i != 0:
unique += "word_{0}, ".format(i)
query += "word_{0} TEXT, ".format(i)
else:
unique += "word"
query += "word TEXT, count INTEGER, UNIQUE({0}) );".format(
unique)
self.execute_sql(query)
def delete_ngram_table(self, cardinality):
"""
Deletes the table for n-grams of a given cardinality. The table name is
constructed from this parameter, for example for cardinality `2` there
will be a table `_2_gram` deleted.
Parameters
----------
cardinality : int
The cardinality of the table to delete.
"""
query = "DROP TABLE IF EXISTS _{0}_gram;".format(cardinality)
self.execute_sql(query)
def create_index(self, cardinality):
"""
Create an index for the table with the given cardinality.
Parameters
----------
cardinality : int
The cardinality to create an index for.
"""
for i in reversed(range(cardinality)):
if i != 0:
query = "CREATE INDEX idx_{0}_gram_{1} ON _{0}_gram(word_{1});".format(cardinality, i)
self.execute_sql(query)
def delete_index(self, cardinality):
"""
Delete index for the table with the given cardinality.
Parameters
----------
cardinality : int
The cardinality of the index to delete.
"""
for i in reversed(range(cardinality)):
if i != 0:
query = "DROP INDEX IF EXISTS idx_{0}_gram_{1};".format(
cardinality, i)
self.execute_sql(query)
def create_unigram_table(self):
"""
Creates a table for n-grams of cardinality 1.
"""
self.create_ngram_table(1)
def create_bigram_table(self):
"""
Creates a table for n-grams of cardinality 2.
"""
self.create_ngram_table(2)
def create_trigram_table(self):
"""
Creates a table for n-grams of cardinality 3.
"""
self.create_ngram_table(3)
def ngrams(self, with_counts=False):
"""
Returns all ngrams that are in the table.
Parameters
----------
None
Returns
-------
ngrams : generator
A generator for ngram tuples.
"""
query = "SELECT "
for i in reversed(range(self.cardinality)):
if i != 0:
query += "word_{0}, ".format(i)
elif i == 0:
query += "word"
if with_counts:
query += ", count"
query += " FROM _{0}_gram;".format(self.cardinality)
result = self.execute_sql(query)
for row in result:
yield tuple(row)
def unigram_counts_sum(self):
query = "SELECT SUM(count) from _1_gram;"
result = self.execute_sql(query)
return self._extract_first_integer(result)
def ngram_count(self, ngram):
"""
Gets the count for a given ngram from the database.
Parameters
----------
ngram : iterable of str
A list, set or tuple of strings.
Returns
-------
count : int
The count of the ngram.
"""
query = "SELECT count FROM _{0}_gram".format(len(ngram))
query += self._build_where_clause(ngram)
query += ";"
result = self.execute_sql(query)
return self._extract_first_integer(result)
def ngram_like_table(self, ngram, limit = -1):
query = "SELECT {0} FROM _{1}_gram {2} ORDER BY count DESC".format(
self._build_select_like_clause(len(ngram)), len(ngram),
self._build_where_like_clause(ngram))
if limit < 0:
query += ";"
else:
query += " LIMIT {0};".format(limit)
return self.execute_sql(query)
def ngram_like_table_filtered(self, ngram, filter, limit = -1):
pass
def increment_ngram_count(self, ngram):
pass
def insert_ngram(self, ngram, count):
"""
Inserts a given n-gram with count into the database.
Parameters
----------
ngram : iterable of str
A list, set or tuple of strings.
count : int
The count for the given n-gram.
"""
query = "INSERT INTO _{0}_gram {1};".format(len(ngram),
self._build_values_clause(ngram, count))
self.execute_sql(query)
def update_ngram(self, ngram, count):
"""
Updates a given ngram in the database. The ngram has to be in the
database, otherwise this method will stop with an error.
Parameters
----------
ngram : iterable of str
A list, set or tuple of strings.
count : int
The count for the given n-gram.
"""
query = "UPDATE _{0}_gram SET count = {1}".format(len(ngram), count)
query += self._build_where_clause(ngram)
query += ";"
self.execute_sql(query)
def remove_ngram(self, ngram):
"""
Removes a given ngram from the database. The ngram has to be in the
database, otherwise this method will stop with an error.
Parameters
----------
ngram : iterable of str
A list, set or tuple of strings.
"""
query = "DELETE FROM _{0}_gram".format(len(ngram))
query += self._build_where_clause(ngram)
query += ";"
self.execute_sql(query)
def open_database(self):
raise NotImplementedError("Method must be implemented")
def close_database(self):
raise NotImplementedError("Method must be implemented")
def execute_sql(self, query):
raise NotImplementedError("Method must be implemented")
############################################### Private methods
def _build_values_clause(self, ngram, count):
ngram_escaped = []
for n in ngram:
ngram_escaped.append(re_escape_singlequote.sub("''", n))
values_clause = "VALUES('"
values_clause += "', '".join(ngram_escaped)
values_clause += "', {0})".format(count)
return values_clause
def _build_where_clause(self, ngram):
where_clause = " WHERE"
for i in range(len(ngram)):
n = re_escape_singlequote.sub("''", ngram[i])
if i < (len(ngram) - 1):
where_clause += " word_{0} = '{1}' AND".format(
len(ngram) - i - 1, n)
else:
pattern = '(?:^%s){e<=%d}' % (n, 2)
where_clause += " word = '{0}'".format(n)
return where_clause
def _build_select_like_clause(self, cardinality):
result = ""
for i in reversed(range(cardinality)):
if i != 0:
result += "word_{0}, ". format(i)
else:
result += "word, count"
return result
def _build_where_like_clause(self, ngram):
where_clause = " WHERE"
for i in range(len(ngram)):
if i < (len(ngram) - 1):
where_clause += " word_{0} = '{1}' AND".format(
len(ngram) - i - 1, ngram[i])
else:
pattern = '(?:%s){e<=%d}' % (ngram[-1], 0)
where_clause += " (word regexp '%s')" % pattern
return where_clause
def _extract_first_integer(self, table):
count = 0
if len(table) > 0:
if len(table[0]) > 0:
count = int(table[0][0])
if not count > 0:
count = 0
return count
class SqliteDatabaseConnector(DatabaseConnector):
"""
Database connector for sqlite databases.
"""
def __init__(self, dbname, cardinality = 1):
"""
Constructor for the sqlite database connector.
Parameters
----------
dbname : str
path to the database file
cardinality : int
default cardinality for n-grams
"""
DatabaseConnector.__init__(self, dbname, cardinality)
self.con = None
self.open_database()
def commit(self):
"""
Sends a commit to the database.
"""
self.con.commit()
def open_database(self):
"""
Opens the sqlite database.
"""
self.con = sqlite3.connect(self.dbname)
self.con.create_function("regexp", 2, _sqlite3_regex)
def close_database(self):
"""
Closes the sqlite database.
"""
if self.con:
self.con.close()
def execute_sql(self, query):
"""
Executes a given query string on an open sqlite database.
"""
c = self.con.cursor()
c.execute(query)
result = c.fetchall()
return result
class PostgresDatabaseConnector(DatabaseConnector):
"""
Database connector for postgres databases.
"""
def __init__(self, dbname, cardinality = 1, host = "localhost", port = 5432,
user = "postgres", password = None, connection = None):
"""
Constructor for the postgres database connector.
Parameters
----------
dbname : str
the database name
cardinality : int
default cardinality for n-grams
host : str
hostname of the postgres database
port : int
port number of the postgres database
user : str
user name for the postgres database
password: str
user password for the postgres database
connection : connection
an open database connection
"""
DatabaseConnector.__init__(self, dbname, cardinality)
self.con = connection
self.host = host
self.port = port
self.user = user
self.password = password
def create_database(self):
"""
Creates an empty database if it does not exist.
"""
if not self._database_exists():
con = psycopg2.connect(host=self.host, database="postgres",
user=self.user, password=self.password, port=self.port)
con.set_isolation_level(
psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
query = "CREATE DATABASE {0};".format(self.dbname)
c = con.cursor()
c.execute(query)
con.close()
if self.normalize:
self.open_database()
query = "CREATE EXTENSION IF NOT EXISTS \"plperlu\";"
self.execute_sql(query)
# query = """CREATE OR REPLACE FUNCTION normalize(str text)
#RETURNS text
#AS $$
#import unicodedata
#return ''.join(c for c in unicodedata.normalize('NFKD', str)
#if unicodedata.category(c) != 'Mn')
#$$ LANGUAGE plpython3u IMMUTABLE;"""
# query = """CREATE OR REPLACE FUNCTION normalize(mystr text)
# RETURNS text
# AS $$
# from unidecode import unidecode
# return unidecode(mystr.decode("utf-8"))
# $$ LANGUAGE plpythonu IMMUTABLE;"""
query = """CREATE OR REPLACE FUNCTION normalize(text)
RETURNS text
AS $$
use Text::Unidecode;
return unidecode(shift);
$$ LANGUAGE plperlu IMMUTABLE;"""
self.execute_sql(query)
self.commit()
self.close_database()
def reset_database(self):
"""
Re-create an empty database.
"""
if self._database_exists():
con = psycopg2.connect(host=self.host, database="postgres",
user=self.user, password=self.password, port=self.port)
con.set_isolation_level(
psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
query = "DROP DATABASE {0};".format(self.dbname)
c = con.cursor()
c.execute(query)
con.close()
self.create_database()
def create_index(self, cardinality):
"""
Create an index for the table with the given cardinality.
Parameters
----------
cardinality : int
The cardinality to create an index for.
"""
DatabaseConnector.create_index(self, cardinality)
query = "CREATE INDEX idx_{0}_gram_varchar ON _{0}_gram(word varchar_pattern_ops);".format(cardinality)
self.execute_sql(query)
if self.lowercase:
for i in reversed(range(cardinality)):
if i != 0:
query = "CREATE INDEX idx_{0}_gram_{1}_lower ON _{0}_gram(LOWER(word_{1}));".format(cardinality, i)
self.execute_sql(query)
if self.normalize:
query = "CREATE INDEX idx_{0}_gram_lower_normalized_varchar ON _{0}_gram(NORMALIZE(LOWER(word)) varchar_pattern_ops);".format(cardinality)
self.execute_sql(query)
else:
query = "CREATE INDEX idx_{0}_gram_lower_varchar ON _{0}_gram(LOWER(word) varchar_pattern_ops);".format(cardinality)
self.execute_sql(query)
elif self.normalize:
query = "CREATE INDEX idx_{0}_gram_normalized_varchar ON _{0}_gram(NORMALIZE(word) varchar_pattern_ops);".format(cardinality)
self.execute_sql(query)
def delete_index(self, cardinality):
"""
Delete index for the table with the given cardinality.
Parameters
----------
cardinality : int
The cardinality of the index to delete.
"""
DatabaseConnector.delete_index(self, cardinality)
query = "DROP INDEX IF EXISTS idx_{0}_gram_varchar;".format(cardinality)
self.execute_sql(query)
query = "DROP INDEX IF EXISTS idx_{0}_gram_normalized_varchar;".format(
cardinality)
self.execute_sql(query)
query = "DROP INDEX IF EXISTS idx_{0}_gram_lower_varchar;".format(
cardinality)
self.execute_sql(query)
query = "DROP INDEX IF EXISTS idx_{0}_gram_lower_normalized_varchar;".\
format(cardinality)
self.execute_sql(query)
for i in reversed(range(cardinality)):
if i != 0:
query = "DROP INDEX IF EXISTS idx_{0}_gram_{1}_lower;".format(
cardinality, i)
self.execute_sql(query)
def commit(self):
"""
Sends a commit to the database.
"""
self.con.commit()
def open_database(self):
"""
Opens the postgres database.
"""
if not self.con:
try:
self.con = psycopg2.connect(host=self.host,
database=self.dbname, user=self.user,
password=self.password, port=self.port)
except psycopg2.Error as e:
print("Error while opening database:")
print(e.pgerror)
def close_database(self):
"""
Closes the postgres database.
"""
if self.con:
self.con.close()
self.con = None
def execute_sql(self, query):
"""
Executes a given query string on an open postgres database.
"""
c = self.con.cursor()
c.execute(query)
result = []
if c.rowcount > 0:
try:
result = c.fetchall()
except psycopg2.ProgrammingError:
pass
return result
############################################### Private methods
def _database_exists(self):
"""
Check if the database exists.
"""
con = psycopg2.connect(host=self.host, database="postgres",
user=self.user, password=self.password, port=self.port)
query_check = "select datname from pg_catalog.pg_database"
query_check += " where datname = '{0}';".format(self.dbname)
c = con.cursor()
c.execute(query_check)
result = c.fetchall()
if len(result) > 0:
return True
return False
def _build_where_like_clause(self, ngram):
where_clause = " WHERE"
for i in range(len(ngram)):
if i < (len(ngram) - 1):
if self.lowercase:
where_clause += " LOWER(word_{0}) = LOWER('{1}') AND".format(
len(ngram) - i - 1, ngram[i])
else:
where_clause += " word_{0} = '{1}' AND".format(
len(ngram) - i - 1, ngram[i])
else:
if ngram[-1] != "":
if self.lowercase:
if self.normalize:
where_clause += " NORMALIZE(LOWER(word)) LIKE NORMALIZE(LOWER('{0}%'))".format(ngram[-1])
else:
where_clause += " LOWER(word) LIKE LOWER('{0}%')".format(ngram[-1])
elif self.normalize:
where_clause += " NORMALIZE(word) LIKE NORMALIZE('{0}%')".format(ngram[-1])
else:
where_clause += " word LIKE '{0}%'".format(ngram[-1])
else:
# remove the " AND"
where_clause = where_clause[:-4]
return where_clause
#################################################### Functions
def insert_ngram_map_sqlite(ngram_map, ngram_size, outfile, append=False,
create_index=False):
sql = SqliteDatabaseConnector(outfile, ngram_size)
sql.create_ngram_table(ngram_size)
for ngram, count in ngram_map.items():
if append:
old_count = sql.ngram_count(ngram)
if old_count > 0:
sql.update_ngram(ngram, old_count + count)
else:
sql.insert_ngram(ngram, count)
else:
sql.insert_ngram(ngram, count)
sql.commit()
if create_index and not append:
sql.create_index(ngram_size)
sql.close_database()
def insert_ngram_map_postgres(ngram_map, ngram_size, dbname, append=False,
create_index=False, host = "localhost", port = 5432, user = "postgres",
password = None, lowercase = False, normalize = False):
sql = PostgresDatabaseConnector(dbname, ngram_size, host, port, user,
password)
sql.lowercase = lowercase
sql.normalize = normalize
sql.create_database()
sql.open_database()
if not append:
sql.delete_index(ngram_size)
sql.delete_ngram_table(ngram_size)
sql.create_ngram_table(ngram_size)
for ngram, count in ngram_map.items():
if append:
old_count = sql.ngram_count(ngram)
if old_count > 0:
sql.update_ngram(ngram, old_count + count)
else:
sql.insert_ngram(ngram, count)
else:
sql.insert_ngram(ngram, count)
sql.commit()
if create_index and not append:
sql.create_index(ngram_size)
sql.commit()
sql.close_database()
def _filter_ngrams(sql, dictionary):
for ngram in sql.ngrams():
delete_ngram = False
for word in ngram:
if not word in dictionary:
delete_ngram = True
if delete_ngram:
sql.remove_ngram(ngram)
def filter_ngrams_sqlite(dictionary, ngram_size, outfile):
sql = SqliteDatabaseConnector(outfile, ngram_size)
_filter_ngrams(sql, dictionary)
sql.commit()
sql.close_database()
def filter_ngrams_postgres(dictionary, ngram_size, dbname, host = "localhost",
port = 5432, user = "postgres", password = None):
sql = PostgresDatabaseConnector(dbname, ngram_size, host, port, user,
password)
sql.open_database()
_filter_ngrams(sql, dictionary)
sql.commit()
sql.close_database()
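The n-gram table layout that `create_ngram_table` produces and the prefix-completion query that `ngram_like_table` builds can be tried directly against an in-memory sqlite database. The words and counts below are made-up sample data for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Schema produced by create_ngram_table(2): columns word_1, word, count,
# with a UNIQUE constraint over the word columns.
con.execute("CREATE TABLE IF NOT EXISTS _2_gram "
            "(word_1 TEXT, word TEXT, count INTEGER, UNIQUE(word_1, word));")
con.execute("INSERT INTO _2_gram VALUES ('the', 'cat', 3);")
con.execute("INSERT INTO _2_gram VALUES ('the', 'car', 1);")
# Prefix completion in the spirit of ngram_like_table: bigrams whose
# context is 'the' and whose last word starts with 'ca', best count first.
rows = con.execute("SELECT word_1, word, count FROM _2_gram "
                   "WHERE word_1 = 'the' AND word LIKE 'ca%' "
                   "ORDER BY count DESC;").fetchall()
print(rows)  # [('the', 'cat', 3), ('the', 'car', 1)]
con.close()
```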


@ -1,71 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
from __future__ import absolute_import, unicode_literals
import abc
class Observer(object):
"""
Base class for classes that want to observe other classes, e.g. the
PredictorActivator.
"""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def update(self, observable):
raise NotImplementedError("Method must be implemented")
class Observable(object):
"""
Base class for everything that needs observation, e.g. the predictors.
"""
def __init__(self):
self._observers = []
def attach(self, observer):
if not observer in self._observers:
self._observers.append(observer)
def detach(self, observer):
try:
self._observers.remove(observer)
except ValueError:
pass
def notify(self, modifier=None):
for observer in self._observers:
if modifier != observer:
observer.update(self)
class Dispatcher(object):
"""
Dispatches observable notifications.
"""
def __init__(self, obj):
self.observables = []
self.dispatch_dict = {}
self.obj = obj
def map(self, observable, func):
observable.attach(self.obj)
self.observables.append(observable)
self.dispatch_dict[observable] = func
self.dispatch(observable)
def dispatch(self, observable):
handler_func = self.dispatch_dict[observable]
handler_func(observable)
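The attach/notify flow of this observer pattern can be exercised with a minimal sketch (the `Logger` observer below is a hypothetical example, not part of the module):

```python
class Observable:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        if observer not in self._observers:
            self._observers.append(observer)

    def notify(self, modifier=None):
        # Tell every attached observer except the one that caused the change.
        for observer in self._observers:
            if modifier != observer:
                observer.update(self)

class Logger:
    # Hypothetical observer that records every notification it receives.
    def __init__(self):
        self.seen = []

    def update(self, observable):
        self.seen.append(observable)

subject = Observable()
log = Logger()
subject.attach(log)
subject.notify()
print(len(log.seen))  # 1
```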


@ -1,425 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
"""
Classes for predictors and to handle suggestions and predictions.
"""
from __future__ import absolute_import, unicode_literals
import os
try:
import configparser
except ImportError:
import ConfigParser as configparser
from . import dbconnector
#import pressagio.observer
MIN_PROBABILITY = 0.0
MAX_PROBABILITY = 1.0
class SuggestionException(Exception): pass
class UnknownCombinerException(Exception): pass
class PredictorRegistryException(Exception): pass
class Suggestion(object):
"""
Class for a simple suggestion, consisting of a string and a probability for
that string.
"""
def __init__(self, word, probability):
self.word = word
self._probability = probability
def __eq__(self, other):
if self.word == other.word and self.probability == other.probability:
return True
return False
def __lt__(self, other):
if self.probability < other.probability:
return True
if self.probability == other.probability:
return self.word < other.word
return False
def __repr__(self):
return "Word: {0} - Probability: {1}".format(
self.word, self.probability)
def probability():
doc = "The probability property."
def fget(self):
return self._probability
def fset(self, value):
if value < MIN_PROBABILITY or value > MAX_PROBABILITY:
raise SuggestionException("Probability is too high or too low.")
self._probability = value
def fdel(self):
del self._probability
return locals()
probability = property(**probability())
class Prediction(list):
"""
Class for predictions from predictors.
"""
def __init__(self):
pass
def __eq__(self, other):
if self is other:
return True
if len(self) != len(other):
return False
for i, s in enumerate(other):
if not s == self[i]:
return False
return True
def suggestion_for_token(self, token):
for s in self:
if s.word == token:
return s
def add_suggestion(self, suggestion):
if len(self) == 0:
self.append(suggestion)
else:
i = 0
while i < len(self) and suggestion < self[i]:
i += 1
self.insert(i, suggestion)
class PredictorActivator(object):
"""
PredictorActivator starts the execution of the active predictors,
monitors their execution and collects the predictions returned, or
terminates a predictor's execution if it exceeds its maximum
prediction time.
The predictions returned by the individual predictors are combined
into a single prediction by the active Combiner.
"""
def __init__(self, config, registry, context_tracker):
self.config = config
self.registry = registry
self.context_tracker = context_tracker
#self.dispatcher = pressagio.observer.Dispatcher(self)
self.predictions = []
self.combiner = None
self.max_partial_prediction_size = int(config.get(
"Selector", "suggestions"))
self.predict_time = None
self._combination_policy = None
def combination_policy():
doc = "The combination_policy property."
def fget(self):
return self._combination_policy
def fset(self, value):
self._combination_policy = value
if value.lower() == "meritocracy":
self.combiner = pressagio.combiner.MeritocracyCombiner()
else:
raise UnknownCombinerException()
def fdel(self):
del self._combination_policy
return locals()
combination_policy = property(**combination_policy())
def predict(self, multiplier = 1, prediction_filter = None):
self.predictions[:] = []
for predictor in self.registry:
self.predictions.append(predictor.predict(
self.max_partial_prediction_size * multiplier,
prediction_filter))
result = self.combiner.combine(self.predictions)
return result
class PredictorRegistry(list): #pressagio.observer.Observer,
"""
Manages instantiation and iteration through predictors and aids in
generating predictions and learning.
The PredictorRegistry class holds the active predictors and provides the
interface required to obtain an iterator to the predictors.
The standard use case is: Predictor obtains an iterator from
PredictorRegistry and invokes the predict() or learn() method on each
Predictor pointed to by the iterator.
Predictor registry should eventually just be a simple wrapper around
plump.
"""
def __init__(self, config, dbconnection = None):
self.config = config
self.dbconnection = dbconnection
self._context_tracker = None
self.set_predictors()
def context_tracker():
doc = "The context_tracker property."
def fget(self):
return self._context_tracker
def fset(self, value):
if self._context_tracker is not value:
self._context_tracker = value
self[:] = []
self.set_predictors()
def fdel(self):
del self._context_tracker
return locals()
context_tracker = property(**context_tracker())
def set_predictors(self):
if (self.context_tracker):
self[:] = []
for predictor in self.config.get("PredictorRegistry", "predictors")\
.split():
self.add_predictor(predictor)
def add_predictor(self, predictor_name):
predictor = None
if self.config.get(predictor_name, "predictor_class") == \
"SmoothedNgramPredictor":
predictor = SmoothedNgramPredictor(self.config,
self.context_tracker, predictor_name,
dbconnection = self.dbconnection)
if predictor:
self.append(predictor)
def close_database(self):
for predictor in self:
predictor.close_database()
class Predictor(object):
"""
Base class for predictors.
"""
def __init__(self, config, context_tracker, predictor_name,
short_desc = None, long_desc = None):
self.short_description = short_desc
self.long_description = long_desc
self.context_tracker = context_tracker
self.name = predictor_name
self.config = config
def token_satisfies_filter(token, prefix, token_filter):
if token_filter:
for char in token_filter:
candidate = prefix + char
if token.startswith(candidate):
return True
return False
class SmoothedNgramPredictor(Predictor): #, pressagio.observer.Observer
"""
Calculates prediction from n-gram model in sqlite database. You have to
create a database with the script `text2ngram` first.
"""
def __init__(self, config, context_tracker, predictor_name,
short_desc = None, long_desc = None, dbconnection = None):
Predictor.__init__(self, config, context_tracker, predictor_name,
short_desc, long_desc)
self.db = None
self.dbconnection = dbconnection
self.cardinality = None
self.learn_mode_set = False
self.dbclass = None
self.dbuser = None
self.dbpass = None
self.dbhost = None
self.dbport = None
self._database = None
self._deltas = None
self._learn_mode = None
self.config = config
self.name = predictor_name
self.context_tracker = context_tracker
self._read_config()
################################################## Properties
def deltas():
doc = "The deltas property."
def fget(self):
return self._deltas
def fset(self, value):
self._deltas = []
# make sure that values are floats
for i, d in enumerate(value):
self._deltas.append(float(d))
self.cardinality = len(value)
self.init_database_connector_if_ready()
def fdel(self):
del self._deltas
return locals()
deltas = property(**deltas())
def learn_mode():
doc = "The learn_mode property."
def fget(self):
return self._learn_mode
def fset(self, value):
self._learn_mode = value
self.learn_mode_set = True
self.init_database_connector_if_ready()
def fdel(self):
del self._learn_mode
return locals()
learn_mode = property(**learn_mode())
def database():
doc = "The database property."
def fget(self):
return self._database
def fset(self, value):
self._database = value
self.dbclass = self.config.get("Database", "class")
if self.dbclass == "PostgresDatabaseConnector":
self.dbuser = self.config.get("Database", "user")
self.dbpass = self.config.get("Database", "password")
self.dbhost = self.config.get("Database", "host")
self.dbport = self.config.get("Database", "port")
self.dblowercase = self.config.getboolean("Database",
"lowercase_mode")
self.dbnormalize = self.config.getboolean("Database",
"normalize_mode")
self.init_database_connector_if_ready()
def fdel(self):
del self._database
return locals()
database = property(**database())
#################################################### Methods
def init_database_connector_if_ready(self):
if self.database and len(self.database) > 0 and \
self.cardinality and self.cardinality > 0 and \
self.learn_mode_set:
if self.dbclass == "SqliteDatabaseConnector":
self.db = dbconnector.SqliteDatabaseConnector(
self.database, self.cardinality) #, self.learn_mode
elif self.dbclass == "PostgresDatabaseConnector":
self.db = dbconnector.PostgresDatabaseConnector(
self.database, self.cardinality, self.dbhost, self.dbport,
self.dbuser, self.dbpass, self.dbconnection)
self.db.lowercase = self.dblowercase
self.db.normalize = self.dbnormalize
self.db.open_database()
def ngram_to_string(self, ngram):
return "|".join(ngram)
def predict(self, max_partial_prediction_size, filter):
tokens = [""] * self.cardinality
prediction = Prediction()
for i in range(self.cardinality):
tokens[self.cardinality - 1 - i] = self.context_tracker.token(i)
prefix_completion_candidates = []
for k in reversed(range(self.cardinality)):
if len(prefix_completion_candidates) >= max_partial_prediction_size:
break
prefix_ngram = tokens[(len(tokens) - k - 1):]
partial = None
if not filter:
partial = self.db.ngram_like_table(prefix_ngram,
max_partial_prediction_size - \
len(prefix_completion_candidates))
else:
partial = self.db.ngram_like_table_filtered(prefix_ngram, filter,
max_partial_prediction_size - \
len(prefix_completion_candidates))
for p in partial:
if len(prefix_completion_candidates) > \
max_partial_prediction_size:
break
candidate = p[-2] # ???
if candidate not in prefix_completion_candidates:
prefix_completion_candidates.append(candidate)
# smoothing
unigram_counts_sum = self.db.unigram_counts_sum()
for j, candidate in enumerate(prefix_completion_candidates):
#if j >= max_partial_prediction_size:
# break
tokens[self.cardinality - 1] = candidate
probability = 0
for k in range(self.cardinality):
numerator = self._count(tokens, 0, k + 1)
denominator = unigram_counts_sum
if numerator > 0:
denominator = self._count(tokens, -1, k)
frequency = 0
if denominator > 0:
frequency = float(numerator) / denominator
probability += self.deltas[k] * frequency
if probability > 0:
prediction.add_suggestion(Suggestion(tokens[self.cardinality - 1],
probability))
return prediction
def close_database(self):
self.db.close_database()
################################################ Private methods
def _read_config(self):
self.database = self.config.get("Database", "database")
self.deltas = self.config.get(self.name, "deltas").split()
self.learn_mode = self.config.get(self.name, "learn")
def _count(self, tokens, offset, ngram_size):
result = 0
if (ngram_size > 0):
ngram = \
tokens[len(tokens) - ngram_size + offset:\
len(tokens) + offset]
result = self.db.ngram_count(ngram)
else:
result = self.db.unigram_counts_sum()
return result
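The deltas-weighted interpolation in predict() and _count() can be sketched in isolation. This is a minimal, self-contained recreation using a plain dict of n-gram counts; the counts, deltas, and function name here are illustrative, not the predictor's real database API.

```python
# Sketch of deltas-weighted n-gram interpolation, mirroring predict():
# each k contributes deltas[k] * count(k+1-gram) / count(k-gram context),
# with the unigram total as the k=0 denominator.
def smoothed_probability(tokens, counts, unigram_sum, deltas):
    probability = 0.0
    for k in range(len(deltas)):
        # count of the (k+1)-gram ending at the candidate token
        numerator = counts.get(tuple(tokens[len(tokens) - k - 1:]), 0)
        denominator = unigram_sum
        if numerator > 0 and k > 0:
            # count of the k-gram context (candidate excluded)
            denominator = counts.get(
                tuple(tokens[len(tokens) - k - 1:len(tokens) - 1]), 0)
        if denominator > 0:
            probability += deltas[k] * (numerator / denominator)
    return probability
```

With counts {("der",): 2, ("linksdenker",): 1, ("die",): 1, ("der", "linksdenker"): 1}, a unigram total of 4, and deltas [0.4, 0.6], the candidate "linksdenker" after "der" scores 0.4 * 1/4 + 0.6 * 1/2 = 0.4.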


@@ -1,8 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE


@@ -1,27 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
from __future__ import absolute_import, unicode_literals
import pressagio.character
def test_first_word_character():
assert pressagio.character.first_word_character("8238$§(a)jaj2u2388!") == 7
assert pressagio.character.first_word_character("123üäö34ashdh") == 3
assert pressagio.character.first_word_character("123&(/==") == -1
def test_last_word_character():
assert pressagio.character.last_word_character("8238$§(a)jaj2u2388!") == 13
assert pressagio.character.last_word_character("123üäö34ashdh") == 12
assert pressagio.character.last_word_character("123&(/==") == -1
def test_is_word_character():
assert pressagio.character.is_word_character("ä") == True
assert pressagio.character.is_word_character("1") == False
assert pressagio.character.is_word_character(".") == False
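The assertions above pin down the contract of pressagio.character precisely enough to sketch hypothetical re-implementations. These are not the module's real definitions (it may use an explicit word-character set rather than str.isalpha); they merely satisfy the same contract: the index of the first/last alphabetic character, or -1 when there is none.

```python
# Hypothetical sketches consistent with the test contract above.
def first_word_character(text):
    """Index of the first word character in text, or -1 if none."""
    for i, ch in enumerate(text):
        if ch.isalpha():
            return i
    return -1

def last_word_character(text):
    """Index of the last word character in text, or -1 if none."""
    for i in range(len(text) - 1, -1, -1):
        if text[i].isalpha():
            return i
    return -1
```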


@@ -1,74 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
from __future__ import absolute_import, unicode_literals
import pressagio.predictor
import pressagio.combiner
class TestMeritocracyCombiner:
def setup(self):
self.combiner = pressagio.combiner.MeritocracyCombiner()
def _create_prediction(self):
prediction = pressagio.predictor.Prediction()
prediction.add_suggestion(pressagio.predictor.Suggestion(
"Test", 0.3))
prediction.add_suggestion(pressagio.predictor.Suggestion(
"Test2", 0.3))
prediction.add_suggestion(pressagio.predictor.Suggestion(
"Test", 0.1))
prediction.add_suggestion(pressagio.predictor.Suggestion(
"Test3", 0.2))
return prediction
def _create_prediction2(self):
prediction = pressagio.predictor.Prediction()
prediction.add_suggestion(pressagio.predictor.Suggestion(
"Test2", 0.3))
prediction.add_suggestion(pressagio.predictor.Suggestion(
"Test", 0.1))
prediction.add_suggestion(pressagio.predictor.Suggestion(
"Test3", 0.2))
return prediction
def test_filter(self):
result = self.combiner.filter(
self._create_prediction())
correct = pressagio.predictor.Prediction()
correct.add_suggestion(pressagio.predictor.Suggestion(
"Test3", 0.2))
correct.add_suggestion(pressagio.predictor.Suggestion(
"Test2", 0.3))
correct.add_suggestion(pressagio.predictor.Suggestion(
"Test", 0.4))
assert result == correct
def test_combine(self):
predictions = [ self._create_prediction2() ]
prediction2 = self._create_prediction2()
prediction2.add_suggestion(pressagio.predictor.Suggestion(
"Test4", 0.1))
predictions.append(prediction2)
result = self.combiner.combine(predictions)
correct = pressagio.predictor.Prediction()
correct.add_suggestion(pressagio.predictor.Suggestion(
"Test3", 0.4))
correct.add_suggestion(pressagio.predictor.Suggestion(
"Test2", 0.6))
correct.add_suggestion(pressagio.predictor.Suggestion(
"Test4", 0.1))
correct.add_suggestion(pressagio.predictor.Suggestion(
"Test", 0.2))
assert result == correct


@@ -1,28 +0,0 @@
Der Linksdenker von Peter Panter
"Er ist ein Gespenst und doch ein Münchner."
Alfred Polgar
Das war ein heiterer Abschied von Berlin: sechs Wochen Panke und ein Abend Karl Valentin die Rechnung ging ohne Rest auf.
Ich kam zu spät ins Theater, der Saal war bereits warm und voll Lachen. Es mochte grade begonnen haben, aber die Leute waren animiert und vergnügt wie sonst erst nach dem zweiten Akt. Am Podium der Bühne auf der Bühne, mitten in der Vorstadtkapelle, saß ein Mann mit einer aufgeklebten Perücke, er sah aus, wie man sich sonst wohl einen Provinzkomiker vorstellt: ich blickte angestrengt auf die Szene und wußte beim besten Willen nicht, was es da wohl zu lachen gäbe … Aber die Leute lachten wieder, und der Mann hatte doch gar nichts gesagt ... Und plötzlich schweifte mein Auge ab, vorn in der ersten Reihe saß noch Einer, den hatte ich bisher nicht bemerkt, und das war: ER.
Ein zaundürrer, langer Geselle, mit langen, spitzen Don-Quichotte-Beinen, mit winkligen, spitzigen Knien, einem Löchlein in der Hose, mit blankem, abgeschabtem Anzug. Sein Löchlein in der Hose er reibt eifrig daran herum. "Das wird Ihnen nichts nützen!" sagt der gestrenge Orchesterchef. Er, leise vor sich hin: "Mit Benzin wärs scho fort!" Leise sagt er das, leise, wie seine schauspielerischen Mittel. Er ist sanft und zerbrechlich, schillert in allen Farben wie eine Seifenblase; wenn er plötzlich zerplatzte, hätte sich Niemand zu wundern.
"Fertig!" klopft der Kapellmeister. Eins, zwei, drei da, einen Sechzehnteltakt zuvor, setzte der dürre Bläser ab und bedeutete dem Kapellmeister mit ernstem Zeigefinger: "s Krawattl rutscht Ihna heraus!" Aergerlich stopft sich der das Ding hinein. "Fertig!" Eins, zwei, drei … So viel, wie ein Auge Zeit braucht, die Wimper zu heben und zu senken, trennte die Kapelle noch von dem schmetternden Tusch da setzte der Lange ab und sah um sich. Der Kapellmeister klopfte ab. Was es nun wieder gäbe ? "Ich muß mal husten!" sagte der Lange. Pause. Das Orchester wartet. Aber nun kann er nicht. Eins, zwei, drei tätärätä! Es geht los.
Und es beginnt die seltsamste Komik, die wir seit langem auf der Bühne gesehen haben: ein Höllentanz der Vernunft um beide Pole des Irrsinns. Das ist eine kleine Seele, dieser Bläser, mit Verbandsorgan, Tarif, Stammtisch und Kollegenklatsch. Er ist ängstlich auf seinen vereinbarten Verdienst und ein bißchen darüber hinaus auf seinen Vorteil bedacht. "Spielen Sie genau, was da steht," sagt der Kapellmeister, "nicht zu viel und nicht zu wenig!" "Zu viel schon gar nicht!" sagt das Verbandsmitglied.
Oben auf der Bühne will der Vorhang nicht auseinander. "Geh mal sofort einer zum Tapezierer", sagt der Kapellmeister, "aber sofort, und sag ihm, er soll gelegentlich, wenn er Zeit hat, vorbeikommen." Geschieht. Der Tapezierer scheint sofort Zeit zu haben, denn er kommt mitten in die Sängerin hineingeplatzt. Steigt mit der Leiter auf die Bühne "Zu jener Zeit, wie liebt ich dich, mein Leben", heult die Sängerin und packt seine Instrumente aus, klopft, hämmert, macht … Seht doch Valentin! Er ist nicht zu halten. Was gibt es da? Was mag da sein? Er hat die Neugier der kleinen Leute. Immer geigend, denn das ist seine bezahlte Pflicht, richtet er sich hoch, steigt auf den Stuhl, reckt zwei Hälse, den seinen und den der Geige, klettert wieder herunter, schreitet durch das Orchester, nach oben auf die Bühne, steigt dort dem Tapezierer auf seiner Leiter nach, geigt und sieht, arbeitet und guckt, was es da Interessantes gibt … Ich muß lange zurückdenken, um mich zu erinnern, wann in einem Theater so gelacht worden ist.
Er denkt links. Vor Jahren hat er einmal in München in einem Bierkeller gepredigt: "Vorgestern bin ich mit meiner Großmutter in der Oper Lohengrin gewesen. Gestern nacht hat sie die ganze Oper nochmal geträumt; das wann i gwußt hätt, hätten wir gar nicht erst hingehen brauchen!"
Aber dieser Schreiber, der sich abends sein Brot durch einen kleinen Nebenverdienst aufbessert, wird plötzlich transparent, durchsichtig, über- und unterirdisch und beginnt zu leuchten. Berühren diese langen Beine noch die Erde?
Es erhebt sich das schwere Problem, eine Pauke von einem Ende der Bühne nach dem andern zu schaffen. Der Auftrag fällt auf Valentin. "I bin eigentlich a Bläser!" sagt er. Bläser schaffen keine Pauken fort. Aber, na … Laatscht hin. Allein geht es nicht. Sein Kollege soll helfen. Und hier wird die Sache durchaus mondsüchtig. "Schafft die Pauke her!" ruft der Kapellmeister ungeduldig. Der Kollege kneetscht in seinen Bart: "Muß das gleich sein?" Der Kapellmeister: "Bringt die Pauke her!" Valentin: "Der Andre laßt fragen, wann." "Der Andre" nicht: Peperl oder: Herr Schmidt oder: Kollege Hintermüller, sondern: der Andre. Der Andre wird Schicksal, Moira und nachbarlicher Kosmos. Sie drehen sich eine Weile um die Pauke, schließlich sagt "der Andre", er müsse hier stehen, denn er sei Linkshänder. Linkshänder? Vergessen sind Pauke, Kapellmeister und Theateraufführung Linkshänder! Und nun, ganz Shakespearisch: "Linkshänder bist? Alles links? Beim Schreiben auch? Beim Essen auch? Beim Schlucken auch? Beim Denken auch?" Und dann triumphierend: "Der Andre sagt, er ist links!" Welche Distanz ist da vom "Andern" wie diesseits ist man selbst, wie jenseits der Andre, wie verschieden, wie getrennt, wie weitab! Mitmensch? Nebenmensch.
Sicherlich legen wir hier das Philosophische hinein. Sicherlich hat Valentin theoretisch diese Gedankengänge nicht gehabt. Aber man zeige uns doch erst einmal einen Komiker, ein Gefäß, in das man so etwas hineinlegen kann. Bei Herrn Westermeier käme man nicht auf solche Gedanken. Hier aber erhebt sich zum Schluß eine Unterhaltung über den Zufall, ein Hin und Her, kleine magische Funken, die aus einem merkwürdig konstruierten Gehirn sprühen. Er sei Unter den Linden spaziert, mit dem Nebenmann, da hätten sie von einem Radfahrer gesprochen und da sei gerade einer des Wegs gekommen. Dies zum Kapitel: Zufall. Der Kapellmeister tobt. Das sei kein Zufall das sei Unsinn. Da kämen tausend Radfahrer täglich vorbei. "Na ja", sagt Valentin, "aber es ist grad Einer kumma!" Unvorstellbar, wie so etwas ausgedacht, geschrieben, probiert wird. Die Komik der irrealen Potentialsätze, die monströse Zerlegung des Satzes: "Ich sehe, daß er nicht da ist!" (was sich da erhebt, ist überhaupt nicht zu sagen!) die stille Dummheit dieses Witzes, der irrational ist und die leise Komponente des korrigierenden Menschenverstandes nicht aufweist, zwischendurch trinkt er aus einem Seidel Bier, kaut etwas, das er in der Tasche aufbewahrt hatte, denkt mit dem Zeigefinger und hat seine kleine Privatfreude, wenn sich der Kapellmeister geirrt hat. Eine kleine Seele. Als Hans Reimann einmal eine Rundfrage stellte, was sich Jedermann wünschen würde, wenn ihm eine Fee drei Wünsche freistellte, hat Karl Valentin geantwortet: "1.) Ewige Gesundheit. 2.) Einen Leibarzt." Eine kleine Seele.
Und ein großer Künstler. Wenn ihn nur nicht die berliner Unternehmer einfangen möchten! Das Geheimnis dieses primitiven Ensembles ist seine kräftige Naivität. Das ist eben so, und wems nicht paßt, der soll nicht zuschauen. Gott behüte, wenn man den zu Duetten und komischen Couplets abrichtete! Mit diesen verdrossenen, verquälten, nervösen Regisseuren und Direktoren auf der Probe, die nicht zuhören und zunächst einmal zu Allem Nein sagen. Mit diesem Drum und Dran von unangenehmen berliner Typen, die vorgeben, zu wissen, was das Publikum will, mit dem sie ihren nicht sehr heitern Kreis identifizieren, mit diesen überarbeiteten und unfrohen Gesellen, die nicht mehr fähig sind, von Herzen über das Einfache zu lachen, "weil es schon dagewesen ist". Sie jedenfalls sind immer schon dagewesen. Karl Valentin aber nur ein Mal, weil er ein seltener, trauriger, unirdischer, maßlos lustiger Komiker ist, der links denkt.
Quelle: http://de.wikisource.org/wiki/Der_Linksdenker


@@ -1,26 +0,0 @@
# Template for profiles
[Database]
class = SqliteDatabaseConnector
database = c:/Users/Peter/Projects/git-github/pressagio/src/pressagio/tests/test_data/test.db
[PredictorRegistry]
predictors = DefaultSmoothedNgramPredictor
[DefaultSmoothedNgramPredictor]
predictor_class = SmoothedNgramPredictor
deltas = 0.01 0.1 0.89
learn = True
[ContextTracker]
sliding_window_size = 80
lowercase_mode = True
[Selector]
suggestions = 6
repeat_suggestions = no
greedy_suggestion_threshold = 0
[PredictorActivator]
predict_time = 100
max_partial_prediction_size = 60
combination_policy = Meritocracy
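Profiles in this format are read with the standard-library configparser, which is how the tests below consume them. A minimal sketch, using a trimmed inline copy of the template above (the option names come from the template; the values here are the template's own):

```python
# Reading a pressagio-style profile with the stdlib configparser.
import configparser

config = configparser.ConfigParser()
config.read_string("""
[Database]
class = SqliteDatabaseConnector
database = test.db

[DefaultSmoothedNgramPredictor]
predictor_class = SmoothedNgramPredictor
deltas = 0.01 0.1 0.89

[Selector]
suggestions = 6
repeat_suggestions = no
""")

# deltas is a space-separated list; numeric/boolean options have typed getters
deltas = [float(d) for d in
          config.get("DefaultSmoothedNgramPredictor", "deltas").split()]
suggestions = config.getint("Selector", "suggestions")
repeat = config.getboolean("Selector", "repeat_suggestions")
```

config.set("Database", "database", path) can then override the database location at runtime, as the predictor tests below do.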


@@ -1,253 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2001-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://www.cidles.eu/ltll/poio>
# For license information, see LICENSE
from __future__ import absolute_import, unicode_literals
import os
import pressagio.dbconnector
psycopg2_installed = False
try:
import psycopg2
psycopg2_installed = True
except ImportError:
pass
class TestSqliteDatabaseConnector():
def setup(self):
self.filename = os.path.abspath(os.path.join(os.path.dirname( __file__ ),
'test_data', 'test.db'))
self.connector = pressagio.dbconnector.SqliteDatabaseConnector(self.filename)
self.connector.open_database()
def test_execute_sql(self):
self.connector.execute_sql("CREATE TABLE IF NOT EXISTS test ( c1 TEXT, c2 INTEGER );")
def test_create_ngram_table(self):
self.connector.create_ngram_table(1)
result = self.connector.execute_sql(
"SELECT name FROM sqlite_master WHERE type='table' AND name='_1_gram';")
assert result == [('_1_gram',)]
self.connector.execute_sql("DROP TABLE _1_gram;")
self.connector.create_ngram_table(2)
result = self.connector.execute_sql(
"SELECT name FROM sqlite_master WHERE type='table' AND name='_2_gram';")
assert result == [('_2_gram',)]
self.connector.execute_sql("DROP TABLE _2_gram;")
self.connector.create_ngram_table(3)
result = self.connector.execute_sql(
"SELECT name FROM sqlite_master WHERE type='table' AND name='_3_gram';")
assert result == [('_3_gram',)]
self.connector.execute_sql("DROP TABLE _3_gram;")
def test_create_index(self):
self.connector.create_ngram_table(2)
self.connector.insert_ngram(('der', 'linksdenker'), 22)
self.connector.create_index(2)
result = self.connector.execute_sql(
"SELECT name FROM sqlite_master WHERE type='index' \
AND name='idx_2_gram_1';")
assert result == [('idx_2_gram_1',)]
self.connector.execute_sql("DROP TABLE _2_gram;")
def test_create_unigram_table(self):
self.connector.create_unigram_table()
result = self.connector.execute_sql(
"SELECT name FROM sqlite_master WHERE type='table' AND name='_1_gram';")
assert result == [('_1_gram',)]
self.connector.execute_sql("DROP TABLE _1_gram;")
def test_create_bigram_table(self):
self.connector.create_bigram_table()
result = self.connector.execute_sql(
"SELECT name FROM sqlite_master WHERE type='table' AND name='_2_gram';")
assert result == [('_2_gram',)]
self.connector.execute_sql("DROP TABLE _2_gram;")
def test_create_trigram_table(self):
self.connector.create_trigram_table()
result = self.connector.execute_sql(
"SELECT name FROM sqlite_master WHERE type='table' AND name='_3_gram';")
assert result == [('_3_gram',)]
self.connector.execute_sql("DROP TABLE _3_gram;")
def test_insert_ngram(self):
self.connector.create_bigram_table()
self.connector.insert_ngram(('der', 'linksdenker'), 22)
result = self.connector.execute_sql("SELECT * FROM _2_gram")
assert result == [('der', 'linksdenker', 22)]
self.connector.execute_sql("DROP TABLE _2_gram;")
def test_update_ngram(self):
self.connector.create_bigram_table()
# Insert
self.connector.insert_ngram(('der', 'linksdenker'), 22)
result = self.connector.execute_sql("SELECT * FROM _2_gram")
assert result == [('der', 'linksdenker', 22)]
# Update
self.connector.update_ngram(('der', 'linksdenker'), 44)
result = self.connector.execute_sql("SELECT * FROM _2_gram")
assert result == [('der', 'linksdenker', 44)]
self.connector.execute_sql("DROP TABLE _2_gram;")
def test_ngram_count(self):
self.connector.create_bigram_table()
self.connector.insert_ngram(('der', 'linksdenker'), 22)
result = self.connector.ngram_count(('der', 'linksdenker'))
assert result == 22
self.connector.execute_sql("DROP TABLE _2_gram;")
def test_ngram_like_table(self):
self.connector.create_bigram_table()
self.connector.insert_ngram(('der', 'linksdenker'), 22)
self.connector.insert_ngram(('der', 'linksabbieger'), 32)
result = self.connector.ngram_like_table(('der', 'links'))
assert result == [('der', 'linksabbieger', 32), (
'der', 'linksdenker', 22)]
self.connector.execute_sql("DROP TABLE _2_gram;")
def teardown(self):
self.connector.close_database()
if os.path.isfile(self.filename):
os.remove(self.filename)
if psycopg2_installed:
class TestPostgresDatabaseConnector():
def setup(self):
self.connector = pressagio.dbconnector.PostgresDatabaseConnector("test")
self.connector.create_database()
self.connector.open_database()
def test_create_database(self):
self.connector.create_database()
def test_create_ngram_table(self):
self.connector.create_ngram_table(1)
result = self.connector.execute_sql(
"SELECT * FROM information_schema.tables WHERE table_name='_1_gram';")
assert len(result) == 1
self.connector.execute_sql("DROP TABLE _1_gram;")
self.connector.create_ngram_table(2)
result = self.connector.execute_sql(
"SELECT * FROM information_schema.tables WHERE table_name='_2_gram';")
assert len(result) == 1
self.connector.execute_sql("DROP TABLE _2_gram;")
self.connector.create_ngram_table(3)
result = self.connector.execute_sql(
"SELECT * FROM information_schema.tables WHERE table_name='_3_gram';")
assert len(result) == 1
self.connector.execute_sql("DROP TABLE _3_gram;")
def test_create_unigram_table(self):
self.connector.create_unigram_table()
result = self.connector.execute_sql(
"SELECT * FROM information_schema.tables WHERE table_name='_1_gram';")
assert len(result) == 1
self.connector.execute_sql("DROP TABLE _1_gram;")
def test_create_bigram_table(self):
self.connector.create_bigram_table()
result = self.connector.execute_sql(
"SELECT * FROM information_schema.tables WHERE table_name='_2_gram';")
assert len(result) == 1
self.connector.execute_sql("DROP TABLE _2_gram;")
def test_create_trigram_table(self):
self.connector.create_trigram_table()
result = self.connector.execute_sql(
"SELECT * FROM information_schema.tables WHERE table_name='_3_gram';")
assert len(result) == 1
self.connector.execute_sql("DROP TABLE _3_gram;")
def test_insert_ngram(self):
self.connector.create_bigram_table()
self.connector.insert_ngram(('der', 'linksdenker'), 22)
result = self.connector.execute_sql("SELECT * FROM _2_gram")
assert result == [('der', 'linksdenker', 22)]
self.connector.execute_sql("DROP TABLE _2_gram;")
def test_update_ngram(self):
self.connector.create_bigram_table()
# Insert
self.connector.insert_ngram(('der', 'linksdenker'), 22)
result = self.connector.execute_sql("SELECT * FROM _2_gram")
assert result == [('der', 'linksdenker', 22)]
# Update
self.connector.update_ngram(('der', 'linksdenker'), 44)
result = self.connector.execute_sql("SELECT * FROM _2_gram")
assert result == [('der', 'linksdenker', 44)]
self.connector.execute_sql("DROP TABLE _2_gram;")
def test_ngram_count(self):
self.connector.create_bigram_table()
self.connector.insert_ngram(('der', 'linksdenker'), 22)
result = self.connector.ngram_count(('der', 'linksdenker'))
assert result == 22
self.connector.execute_sql("DROP TABLE _2_gram;")
def test_ngram_like_table(self):
self.connector.create_bigram_table()
self.connector.insert_ngram(('der', 'linksdenker'), 22)
self.connector.insert_ngram(('der', 'linksabbieger'), 32)
result = self.connector.ngram_like_table(('der', 'links'))
assert result == [('der', 'linksabbieger', 32), (
'der', 'linksdenker', 22)]
self.connector.execute_sql("DROP TABLE _2_gram;")
# testing lowercase mode
self.connector.lowercase = True
self.connector.close_database()
self.connector.reset_database()
self.connector.open_database()
self.connector.create_bigram_table()
self.connector.insert_ngram(('Der', 'Linksdenker'), 22)
self.connector.insert_ngram(('Der', 'Linksabbieger'), 32)
result = self.connector.ngram_like_table(('der', 'links'))
assert result == [('Der', 'Linksabbieger', 32), (
'Der', 'Linksdenker', 22)]
self.connector.execute_sql("DROP TABLE _2_gram;")
# testing normalize mode
self.connector.normalize = True
self.connector.close_database()
self.connector.reset_database()
self.connector.open_database()
self.connector.create_bigram_table()
self.connector.insert_ngram(('Der', 'Lünksdenker'), 22)
self.connector.insert_ngram(('Der', 'Lünksabbieger'), 32)
result = self.connector.ngram_like_table(('der', 'lunks'))
assert result == [('Der', 'Lünksabbieger', 32), (
'Der', 'Lünksdenker', 22)]
self.connector.execute_sql("DROP TABLE _2_gram;")
self.connector.normalize = False
self.connector.lowercase = False
def teardown(self):
self.connector.close_database()
con = psycopg2.connect(database="postgres", user="postgres")
con.set_isolation_level(
psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
c = con.cursor()
c.execute("DROP DATABASE test;")
con.close()


@@ -1,143 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
from __future__ import absolute_import, unicode_literals
import os
try:
import configparser
except ImportError:
import ConfigParser as configparser
import pressagio.predictor
import pressagio.tokenizer
import pressagio.dbconnector
import pressagio.context_tracker
import pressagio.callback
class TestSuggestion():
def setup(self):
self.suggestion = pressagio.predictor.Suggestion("Test", 0.3)
def test_probability(self):
self.suggestion.probability = 0.1
assert self.suggestion.probability == 0.1
class TestPrediction():
def setup(self):
self.prediction = pressagio.predictor.Prediction()
def test_add_suggestion(self):
self.prediction.add_suggestion(pressagio.predictor.Suggestion(
"Test", 0.3))
assert self.prediction[0].word == "Test"
assert self.prediction[0].probability == 0.3
self.prediction.add_suggestion(pressagio.predictor.Suggestion(
"Test2", 0.2))
assert self.prediction[0].word == "Test"
assert self.prediction[0].probability == 0.3
assert self.prediction[1].word == "Test2"
assert self.prediction[1].probability == 0.2
self.prediction.add_suggestion(pressagio.predictor.Suggestion(
"Test3", 0.6))
assert self.prediction[0].word == "Test3"
assert self.prediction[0].probability == 0.6
assert self.prediction[1].word == "Test"
assert self.prediction[1].probability == 0.3
assert self.prediction[2].word == "Test2"
assert self.prediction[2].probability == 0.2
self.prediction[:] = []
def test_suggestion_for_token(self):
self.prediction.add_suggestion(pressagio.predictor.Suggestion(
"Token", 0.8))
assert self.prediction.suggestion_for_token("Token").probability == 0.8
self.prediction[:] = []
class StringStreamCallback(pressagio.callback.Callback):
def __init__(self, stream):
pressagio.callback.Callback.__init__(self)
self.stream = stream
class TestSmoothedNgramPredictor():
def setup(self):
self.dbfilename = os.path.abspath(os.path.join(
os.path.dirname( __file__ ), 'test_data', 'test.db'))
self.infile = os.path.abspath(os.path.join(os.path.dirname( __file__ ),
'test_data', 'der_linksdenker.txt'))
for ngram_size in range(3):
ngram_map = pressagio.tokenizer.forward_tokenize_file(
self.infile, ngram_size + 1, False)
pressagio.dbconnector.insert_ngram_map_sqlite(ngram_map, ngram_size + 1,
self.dbfilename, False)
config_file = os.path.abspath(os.path.join(os.path.dirname( __file__ ),
'test_data', 'profile_smoothedngram.ini'))
config = configparser.ConfigParser()
config.read(config_file)
config.set("Database", "database", self.dbfilename)
self.predictor_registry = pressagio.predictor.PredictorRegistry(config)
self.callback = StringStreamCallback("")
context_tracker = pressagio.context_tracker.ContextTracker(
config, self.predictor_registry, self.callback)
def test_predict(self):
predictor = self.predictor_registry[0]
predictions = predictor.predict(6, None)
assert len(predictions) == 6
words = []
for p in predictions:
words.append(p.word)
assert "er" in words
assert "der" in words
assert "die" in words
assert "und" in words
assert "nicht" in words
self.callback.stream="d"
predictions = predictor.predict(6, None)
assert len(predictions) == 6
words = []
for p in predictions:
words.append(p.word)
assert "der" in words
assert "die" in words
assert "das" in words
assert "da" in words
assert "Der" in words
self.callback.stream="de"
predictions = predictor.predict(6, None)
assert len(predictions) == 6
words = []
for p in predictions:
words.append(p.word)
assert "der" in words
assert "Der" in words
assert "dem" in words
assert "den" in words
assert "des" in words
def teardown(self):
if self.predictor_registry[0].db:
self.predictor_registry[0].db.close_database()
del(self.predictor_registry[0])
if os.path.isfile(self.dbfilename):
os.remove(self.dbfilename)


@@ -1,91 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
from __future__ import absolute_import, unicode_literals
import os
import codecs
import pressagio.tokenizer
class TestForwardTokenizer():
def setup(self):
filename = os.path.abspath(os.path.join(os.path.dirname( __file__ ),
'test_data', 'der_linksdenker.txt'))
self.tokenizer = pressagio.tokenizer.ForwardTokenizer(filename)
def test_reset_stream(self):
self.tokenizer.next_token()
assert self.tokenizer.offset != 0
self.tokenizer.reset_stream()
assert self.tokenizer.offset == 0
def test_count_characters(self):
# TODO: Windows tokenization is different, check why
assert self.tokenizer.count_characters() == 7954
def test_count_tokens(self):
assert self.tokenizer.count_tokens() == 1235
def test_has_more_tokens(self):
assert self.tokenizer.has_more_tokens() == True
def test_next_token(self):
assert self.tokenizer.next_token() == "Der"
self.tokenizer.reset_stream()
def test_is_blankspace(self):
assert self.tokenizer.is_blankspace('\n') == True
assert self.tokenizer.is_blankspace('a') == False
def test_is_separator(self):
assert self.tokenizer.is_separator('"') == True
assert self.tokenizer.is_separator('b') == False
class TestReverseTokenizer():
def setup(self):
filename = os.path.abspath(os.path.join(os.path.dirname( __file__ ),
'test_data', 'der_linksdenker.txt'))
self.tokenizer = pressagio.tokenizer.ReverseTokenizer(filename)
def test_reset_stream(self):
self.tokenizer.next_token()
assert self.tokenizer.offset != self.tokenizer.offend
self.tokenizer.reset_stream()
assert self.tokenizer.offset == self.tokenizer.offend
def test_count_tokens(self):
assert self.tokenizer.count_tokens() == 1235
def test_has_more_tokens(self):
assert self.tokenizer.has_more_tokens() == True
def test_next_token(self):
assert self.tokenizer.next_token() == "Linksdenker"
self.tokenizer.reset_stream()
def test_tokenizers_are_equal():
filename = os.path.abspath(os.path.join(os.path.dirname( __file__ ),
'test_data', 'der_linksdenker.txt'))
reverse_tokenizer = pressagio.tokenizer.ReverseTokenizer(filename)
forward_tokenizer = pressagio.tokenizer.ForwardTokenizer(filename)
forward_tokens = []
reverse_tokens = []
while forward_tokenizer.has_more_tokens():
forward_tokens.append(forward_tokenizer.next_token())
while reverse_tokenizer.has_more_tokens():
reverse_tokens.append(reverse_tokenizer.next_token())
diff = set(forward_tokens) ^ set(reverse_tokens)
assert forward_tokens == reverse_tokens[::-1]
assert len(diff) == 0


@@ -1,289 +0,0 @@
# -*- coding: utf-8 -*-
#
# Poio Tools for Linguists
#
# Copyright (C) 2009-2013 Poio Project
# Author: Peter Bouda <pbouda@cidles.eu>
# URL: <http://media.cidles.eu/poio/>
# For license information, see LICENSE
"""
Several classes to tokenize text.
"""
from __future__ import absolute_import, unicode_literals
import abc
import codecs
import collections
from . import character
class Tokenizer(object):
"""
Base class for all tokenizers.
"""
__metaclass__ = abc.ABCMeta
def __init__(self, stream, blankspaces = character.blankspaces,
separators = character.separators):
"""
Constructor of the Tokenizer base class.
Parameters
----------
stream : str or io.IOBase
The stream to tokenize. Can be a filename or any open IO stream.
blankspaces : str
The characters that represent empty spaces.
separators : str
The characters that separate token units (e.g. word boundaries).
"""
self.separators = separators
self.blankspaces = blankspaces
self.lowercase = False
self.offbeg = 0
self.offset = None
self.offend = None
def is_blankspace(self, char):
"""
Test if a character is a blankspace.
Parameters
----------
char : str
The character to test.
Returns
-------
ret : bool
True if character is a blankspace, False otherwise.
"""
if len(char) > 1:
raise TypeError("Expected a char.")
if char in self.blankspaces:
return True
return False
def is_separator(self, char):
"""
Test if a character is a separator.
Parameters
----------
char : str
The character to test.
Returns
-------
ret : bool
True if character is a separator, False otherwise.
"""
if len(char) > 1:
raise TypeError("Expected a char.")
if char in self.separators:
return True
return False
@abc.abstractmethod
def count_characters(self):
raise NotImplementedError("Method must be implemented")
@abc.abstractmethod
def reset_stream(self):
raise NotImplementedError("Method must be implemented")
@abc.abstractmethod
def count_tokens(self):
raise NotImplementedError("Method must be implemented")
@abc.abstractmethod
def has_more_tokens(self):
raise NotImplementedError("Method must be implemented")
@abc.abstractmethod
def next_token(self):
raise NotImplementedError("Method must be implemented")
@abc.abstractmethod
def progress(self):
raise NotImplementedError("Method must be implemented")
class ForwardTokenizer(Tokenizer):
def __init__(self, stream, blankspaces = character.blankspaces,
separators = character.separators):
Tokenizer.__init__(self, stream, blankspaces, separators)
if not hasattr(stream, 'read'):
stream = codecs.open(stream, "r", "utf-8")
self.text = stream.read()
stream.close()
self.offend = self.count_characters() - 1
self.reset_stream()
def count_tokens(self):
count = 0
while(self.has_more_tokens()):
count += 1
self.next_token()
self.reset_stream()
return count
def count_characters(self):
"""
Counts the number of unicode characters in the IO stream.
"""
return len(self.text)
def has_more_tokens(self):
if self.offset < self.offend:
return True
return False
def next_token(self):
current = self.text[self.offset]
self.offset += 1
token = ""
if self.offset <= self.offend:
while (self.is_blankspace(current) or self.is_separator(current)) \
and self.offset < self.offend:
current = self.text[self.offset]
self.offset += 1
while not self.is_blankspace(current) and not self.is_separator(
current) and self.offset <= self.offend:
if self.lowercase:
current = current.lower()
token += current
current = self.text[self.offset]
self.offset += 1
if self.offset > self.offend:
token += self.text[-1]
return token
def progress(self):
return float(self.offset) / self.offend
def reset_stream(self):
self.offset = 0
class ReverseTokenizer(Tokenizer):
def __init__(self, stream, blankspaces = character.blankspaces,
separators = character.separators):
Tokenizer.__init__(self, stream, blankspaces, separators)
if not hasattr(stream, 'read'):
stream = codecs.open(stream, "r", "utf-8")
self.text = stream.read()
stream.close()
self.offend = self.count_characters() - 1
self.offset = self.offend
def count_tokens(self):
curroff = self.offset
self.offset = self.offend
count = 0
while (self.has_more_tokens()):
self.next_token()
count += 1
self.offset = curroff
return count
def count_characters(self):
"""
Counts the number of unicode characters in the IO stream.
"""
return len(self.text)
def has_more_tokens(self):
if (self.offbeg <= self.offset):
return True
else:
return False
def next_token(self):
token = ""
while (self.offbeg <= self.offset) and len(token) == 0:
current = self.text[self.offset]
if (self.offset == self.offend) and (self.is_separator(current) \
or self.is_blankspace(current)):
self.offset -= 1
return token
while (self.is_blankspace(current) or self.is_separator(current)) \
and self.offbeg < self.offset:
self.offset -= 1
if (self.offbeg <= self.offset):
current = self.text[self.offset]
while not self.is_blankspace(current) and not self.is_separator(
current) and self.offbeg <= self.offset:
if self.lowercase:
current = current.lower()
token = current + token
self.offset -= 1
if (self.offbeg <= self.offset):
current = self.text[self.offset]
return token
def progress(self):
return float(self.offend - self.offset) / (self.offend - self.offbeg)
def reset_stream(self):
self.offset = self.offend
def forward_tokenize_file(infile, ngram_size, lowercase=False, cutoff=0):
ngram_map = collections.defaultdict(int)
ngram_list = []
tokenizer = ForwardTokenizer(infile)
tokenizer.lowercase = lowercase
for i in range(ngram_size - 1):
if not tokenizer.has_more_tokens():
break
ngram_list.append(tokenizer.next_token())
while (tokenizer.has_more_tokens()):
token = tokenizer.next_token()
ngram_list.append(token)
ngram_map[tuple(ngram_list)] += 1
ngram_list.pop(0)
if cutoff > 0:
    # iterate over a copy of the keys: deleting from the live view while
    # iterating it raises RuntimeError in Python 3
    for k in list(ngram_map.keys()):
        if ngram_map[k] <= cutoff:
            del ngram_map[k]
return ngram_map
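The sliding-window counting inside forward_tokenize_file can be sketched over an in-memory token list instead of a file and tokenizer. This is a simplified, hypothetical variant (the function name and plain-list input are illustrative); the window/pop logic and cutoff pruning mirror the loop above.

```python
# Sliding-window n-gram counting, as in forward_tokenize_file but over
# a pre-tokenized list: seed the window with ngram_size - 1 tokens, then
# slide it one token at a time, counting each full window.
import collections

def count_ngrams(tokens, ngram_size, cutoff=0):
    ngram_map = collections.defaultdict(int)
    window = list(tokens[:ngram_size - 1])
    for token in tokens[ngram_size - 1:]:
        window.append(token)
        ngram_map[tuple(window)] += 1
        window.pop(0)
    if cutoff > 0:
        # prune rare n-grams; iterate over a copy so deletion is safe
        for k in list(ngram_map):
            if ngram_map[k] <= cutoff:
                del ngram_map[k]
    return dict(ngram_map)
```

For ngram_size = 1 the seed window is empty and the loop degenerates to plain unigram counting, matching the range(3) loop in the predictor test setup that builds 1-, 2-, and 3-gram tables.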