I tried out the CollateX Python library to see if it seems useful for visualizing similar text passages about legal doctrines, especially caselaw. I used a very simple example dataset consisting of a holding from Loper Bright Enterprises v. Raimondo (the opinion that abolished Chevron deference) and two passages from later opinions restating the holding of Raimondo (Tennessee v. Becerra and United States v. Trumbull). You can find the exact code I used in this Jupyter Notebook.
I thought CollateX’s out-of-the-box “html” mode visualization was potentially useful. In this case it might help readers notice that while the Supreme Court said a certain “argument” was not a basis for overruling a holding, one of the lower courts restated this to assert that an “error” was not a basis for overruling.
Here’s the table CollateX produced to compare the three passages.
Raimondo | Trumbull | Becerra |
---|---|---|
The | The | The |
holdings of those cases that specific agency actions are lawful—including the Clean Air Act holding of | - | Supreme |
- | Court | Court |
- | acknowledged its " | cautioned litigants hoping to rehash or relitigate previously settled issues decided on |
Chevron | - | Chevron |
itself—are still subject to statutory stare decisis despite our | - | that "[m]ere |
change in interpretive methodology | change in interpretive methodology | - |
. Mere | - | - |
reliance on Chevron cannot constitute a | - | reliance on Chevron cannot constitute a |
" | - | - |
special justification | - | special justification |
" | " | - |
for overruling such a holding | meant that these precedents were "wrongly decided," but explained | for overruling such a holding |
, because to say a precedent relied on Chevron is | - | ." To argue as much, the Court continued |
, | - | , |
- | - | would " |
at best | - | at best |
, " | - | ," be " |
just an argument | - | just an argument |
that | that | that |
the precedent was wrongly decided | mere error | the precedent was wrongly decided |
- | - | ." Id |
. | - | . |
That | - | And this, as the majority concluded, |
is | is | is |
- | " | " |
not enough to justify overruling a statutory precedent | not enough to justify overruling a statutory precedent | not enough to justify overruling a statutory precedent |
. | ." | ." |
I thought the “svg” and “svg_simple” modes were not very readable, because they were a little too chaotic and cluttered with arrows and labels. I didn’t find any option for the svg charts to be vertically oriented, so even with this relatively short text I got an extremely wide image, which Jupyter and Jekyll both wanted to shrink down to an unreadable size.
(CollateX also has an “html2” mode that uses eyeball-melting background colors to “draw your attention” to cells containing unique text.)
CollateX has a “fuzzy” matching mode, where similar strings can be shown side-by-side in the visualization. However, the fuzzy mode is only available when CollateX is set to put only a single word or “token” in each cell of the table, which seems like it would be far less readable. There are also JSON and XML output modes that could be used to export the data to alternate visualization tools.
The primary use case of CollateX is comparing alternate versions of the same work, which is not exactly what I did here. While a later court opinion might choose to restate a significant passage from an earlier opinion, it might also insert commentary or quote parts of the earlier opinion out of order. If the two texts aren’t intended to be the same, then it may not be useful for a collation to show that there are lots of differences. To determine whether it’s appropriate to show a text collation to a user, a publisher would need a reliable way of detecting situations where a later court is trying to restate a holding from an earlier opinion, but not using the exact same words.
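I don't have a solution for that detection problem, but as a rough baseline, standard-library fuzzy matching could flag candidate restatements before handing a pair of passages to CollateX. This is purely an illustrative sketch (the function name and threshold are invented), not something CollateX provides:

```python
from difflib import SequenceMatcher

def looks_like_restatement(earlier: str, later: str, threshold: float = 0.6) -> bool:
    """Crude screen: treat the later passage as a candidate restatement
    when enough of its text matches the earlier passage."""
    ratio = SequenceMatcher(None, earlier.lower(), later.lower()).ratio()
    return ratio >= threshold

holding = "just an argument that the precedent was wrongly decided"
restatement = "just an argument that the precedent was wrongly decided, the court said"
print(looks_like_restatement(holding, restatement))  # True
```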
CollateX is available under GNU General Public License v3.0.
In June I was lucky enough to be sent to Braga, Portugal to represent the Cornell Legal Information Institute at the 19th ICAIL conference (International Conference on Artificial Intelligence and Law). The core of this conference is an academic community rooted in knowledge-heavy AI approaches, many of them with lineage extending back at least to the 1980s. I’ve been reading these researchers’ work for years, so it was great to meet them in person and hear what they’ve been up to lately.
Only a few of the research projects presented had code available to run in Python. Out of those that did, I think the standout was a Legal Case-Based Reasoning system presented by Daphne Odekerken of Utrecht University. The paper was coauthored with Floris Bex and Henry Prakken, and it also cited inspiration from formal logic models of precedent developed by John Horty.
The Python package, which is just named with the initials LCBR, lets users define a legal issue and a list of factors that contribute to determining the issue’s outcome. A factor’s value can be set as a boolean, or as a number in a range. For instance, the demo application is about whether a retail sales website should be investigated as fraudulent. Boolean factors include whether the site has a terms and conditions page, and whether it has a non-functioning payment link. Numeric factors include the number of days the page has been online. If I understand right, setting up a user application with the package requires identifying the full list of factors that can be used to decide the legal issue, and also requires labeling each of the factors as “pro” or “contra”. For instance, the number of days online would be a “contra” factor where the website becomes less suspicious the longer it’s been online. I don’t think there’s any way to say that a “pro” factor can become a “contra” factor in the presence of specific other factors, so the system would only work when every factor has a certain polarity that never changes.
One great feature of LCBR is that it might not require you to have all the possible information about a particular case before you can get an answer about the outcome. If there’s no possible factor that will distinguish your case from other cases that reached a certain outcome, then the system can reach a stable conclusion that the same outcome will be reached even if new information is added later. For example, if there’s no way the facts of the current case can turn out to be more favorable to the defendant on any dimension compared to the facts of a prior case that reached a decision against a defendant, then the system reasons that the current case will go against the defendant as well. As that example suggests, LCBR does assume that all the cases in the database are consistent with one another. If not enough factors have been determined to reach a stable conclusion, LCBR also has an algorithm to determine which other factors are most relevant to decide the issue. This seems similar to the function that Docassemble uses to determine which question to present to the user next during a guided interview.
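The core of that "stable conclusion" test can be sketched in a few lines of plain Python. To be clear, this is my own toy reconstruction of the a fortiori idea, not the LCBR package's actual API, and the factor names are invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    pro: frozenset      # factors favoring the outcome (e.g. opening an investigation)
    contra: frozenset   # factors cutting against it
    outcome: bool       # True = decided for the "pro" side

def forced_by(new_pro: frozenset, new_contra: frozenset, precedent: Case) -> bool:
    """The new fact pattern is at least as strong for the pro side as a
    precedent decided pro: it has every pro factor the precedent had,
    and no contra factor the precedent lacked."""
    return (precedent.outcome
            and precedent.pro <= new_pro
            and new_contra <= precedent.contra)

prior = Case(pro=frozenset({"no_terms_page", "broken_payment_link"}),
             contra=frozenset({"long_time_online"}),
             outcome=True)  # prior site was referred for investigation

# The current site has the same red flags plus one more, and no mitigating
# factors, so the precedent forces the same conclusion.
print(forced_by(frozenset({"no_terms_page", "broken_payment_link", "fake_address"}),
                frozenset(), prior))  # True
```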
The LCBR package has a UI that can be spun up using Flask and Dash, but I wouldn’t exactly call it user-friendly because it seems to assume the user has a lot of knowledge about the theoretical concepts that inform the case-based reasoning algorithm. However, a version of the same tool was used to create a user-facing intake system for the National Police AI Lab of the Netherlands.
I’d love to see a system like LCBR expanded beyond its current limitations. When creating a model of a legal issue, I’d prefer not to have to list all factors that can bear on a determination in advance, because sometimes newly-added cases will also identify new factors. I’d also prefer not to have to identify the polarity of every factor in advance. Even the paper introducing LCBR concedes that the assumption of a consistent case base is “quite a strong assumption.” Instead of making that assumption, I’d like to use a system that can provide rules of precedent that start with an inconsistent set of cases, and then show how certain cases can be overruled or disregarded until all the cases that remain are consistent. (But when I suggest features like that, I’m thinking of what would be useful for simulating common law jurisprudence, which probably wouldn’t meet the needs of government agencies in the Netherlands.) And finally, instead of having models of “cases” covering only one legal issue, it would be nice to model a collection of rules showing ways to reach multiple legal conclusions, some of which are factors potentially supporting further conclusions. That would allow the system to evolve beyond simple one-step legal determinations such as whether to open a fraud investigation, so that the system could model more complex processes such as litigation.
I’ve released version 0.9 of AuthoritySpoke. In my last blog post about AuthoritySpoke, I wrote that I had decided not to migrate all its data serialization code to Pydantic. In this post, I’ll explain why I changed my mind and did just that.
Basically, I became tired of the proliferation of messy data loading code in the AuthoritySpoke repository. That repository was the core of my “legal rule automation” project, but it was beginning to look like a cluttered workshop full of odds and ends. Every time a part of AuthoritySpoke started to look neat and coherent, I bundled it up as a separate Python package with separate documentation and moved it to a separate GitHub repository, leaving behind the messier code that didn’t quite fit together or that was hard to use.
When I created the judicial opinion download library Justopinion, I was able to choose a serializer without the burden of supporting legacy code, and Pydantic felt like the right choice, so I went with it. But then Justopinion became a dependency of AuthoritySpoke, which meant AuthoritySpoke had to import Pydantic to run. That put me on the path to adopting Pydantic for the entire AuthoritySpoke project.
The major design difference between Pydantic and the serializer I previously used, Marshmallow, is that with Pydantic the information needed to serialize objects to JSON is stored on the objects themselves, rather than in separate serializer classes. The result was that I was also able to delete a lot of old code I’d written, including several whole modules, and replace it with Pydantic’s built-in functionality.
My biggest fear about the transition was that because I’d have to make changes to all the Python classes in my project that stored data, the change might introduce bugs that I wouldn’t be able to fix, and the migration to Pydantic would simply fail. But in the end I was able to migrate every feature to Pydantic, while removing both Marshmallow and its associated API documentation library Apispec from the list of dependencies that have to be imported when AuthoritySpoke is installed.
The new version 0.9 of AuthoritySpoke has been mainly about reducing the amount of code and improving its organization, without introducing many new features. But as a result of the Pydantic transition, nearly all AuthoritySpoke classes have newly-added .dict() and .json() methods for serializing to generic datatypes, as well as .schema() and .schema_json() methods for generating JSON Schema API documentation. These serialization methods are easier to use and understand than the alternatives that existed in the past. Overall, version 0.9 is more consistent, more maintainable, less buggy, and more suitable for larger projects.
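Here is roughly what those methods look like on a stand-in model (Fact here is a simplified illustration I wrote for this post, not the real AuthoritySpoke class):

```python
from pydantic import BaseModel

class Fact(BaseModel):
    content: str
    absent: bool = False

fact = Fact(content="the suspected beard was a beard")

print(fact.dict())         # {'content': 'the suspected beard was a beard', 'absent': False}
print(fact.json())         # the same data as a JSON string
print(Fact.schema_json())  # JSON Schema, ready for API documentation
```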
The Caselaw Access Project is one of the two best resources for free programmatic access to American caselaw data (along with CourtListener). It has a great, user-friendly website, and thoughtful documentation aimed at several different audiences. And it has a more dramatic story than most legal tech projects, in which archivists at Harvard’s law library cut the spines off of every book in an exhaustive law library collection, digitally scanned them all, but subjected the resulting archive to access restrictions for seven years from the end of the scanning project. (Beware of clicking that link, if like me you jealously guard your monthly allocation of free New York Times articles.)
In the years since the API launched, it’s become significantly more useful with the addition of citation graph data. But it’s also important to recognize the limits on the API’s scope: it includes only cases published in print, in bound volumes, through 2018, when the scanning project took place. The API also limits public users to 500 API calls per day for most jurisdictions.
I created a Python module called Justopinion with a few utility functions for getting opinions from the Caselaw Access Project API. It’s mostly designed around the use case of downloading a judicial decision with a known citation, getting the text of the opinions in the case, and then downloading any other decisions cited within those opinions.
Here’s an example from Justopinion’s getting started guide that roughly follows that workflow:
from justopinion import CAPClient
client = CAPClient(api_token=CAP_API_KEY)
thornton = client.read_cite("1 Breese 34", full_case=True)
The text that gets passed to the CAPClient.read_cite method (such as “1 Breese 34”) can be normalized as a recognizable citation thanks to the Eyecite package from the Free Law Project.
thornton.casebody.data.parties[0]
'John Thornton and others, Appellants, v. George Smiley and John Bradshaw, Appellees.'
The case is loaded as a Pydantic model, so any static analysis tools you use on your Python code should understand the data types for each field. The case.law API documentation describes what you should expect the API to deliver.
len(thornton.cites_to)
1
str(thornton.cites_to[0])
'Citation to 15 Ill., 284'
We can see that Thornton v. Smiley cites to only one other case. By passing the citation to the CAPClient.read_cite method, we can download JSON representing the cited decision and turn it into another instance of the Decision class.
cited = client.read_cite(thornton.cites_to[0], full_case=True)
str(cited)
'Marsh v. People, 15 Ill. 284 (1853-12-01)'
We can also locate text within an opinion we downloaded, and generate an Anchorpoint selector to reference a passage from the opinion.
thornton.opinions[0].locate_text("The court knows of no power in the administrator")
TextPositionSet{TextPositionSelector[22, 70)}
Of course, Justopinion isn’t necessary for accessing the Caselaw Access Project API from Python. The API’s documentation gives this example of downloading a case using requests, a more flexible option that might involve writing more code in some situations.
import requests

response = requests.get(
    'https://api.case.law/v1/cases/435800/?full_case=true',
    headers={'Authorization': 'Token abcd12345'}
)
Justopinion originated as part of my other Python library AuthoritySpoke, and as of AuthoritySpoke version 0.8, Justopinion is a dependency that gets imported as part of AuthoritySpoke’s setup process. Justopinion is still in an early state, and there are lots of features that could still be added. I decided to use the generic name Justopinion instead of naming the package after the CAP API because I’m considering also adding support for the CourtListener API, and possibly some use cases that don’t depend on an API. If you have any comments or requests about Justopinion, please post them at its GitHub repo.
Creating a data schema for legal analysis involves plunging into abstraction. How deeply abstract the schema becomes probably depends more than we want to admit on the temperament of the person creating the schema. The more abstraction, the more powerful and expressive the schema can be, but also the greater the risk the schema will crumple under the pressure of the analyst’s assumptions or unprovable metaphysical beliefs. At the Singapore Management University Center for Computational Law, Principal Investigator Meng Weng Wong, Jason Morris, and the rest of their team went extremely deep to create an extension to Docassemble called Docassemble-L4. I salute their bravery.
Like most Docassemble extensions, Docassemble-L4 uses Python code to create an interactive interview to help apply a legal standard to a user’s fact pattern. What makes Docassemble-L4 unique is that it lets you create that Python code by translating it automatically from a different language called s(CASP), which appears to be closely related to Prolog. The intended way to get that s(CASP) code is by translating it automatically from L4. I understand L4 is a programming language that hasn’t been released yet, but it will leverage the Z3 theorem prover. To use Docassemble-L4, you must provide not just the s(CASP) code, but also a YAML document in a special format called LExSIS (no relation to the legal publisher), which tells Docassemble how to create an interview interface to elicit information relevant to the declarative logic statements in the s(CASP) code.
And the data input syntax is verbose. It suffers from a degree of combinatorial explosion since s(CASP)’s inference rules don’t seem to have a proper “or” syntax, as shown in this excerpt from the example data:
business_entity(X) :- carries_on(X,Y), business(Y), company(X), not law_practice_in_singapore(X), not joint_law_venture(X), not formal_law_alliance(X), not foreign_law_practice(X), not third_schedule_institution(X).
business_entity(X) :- carries_on(X,Y), business(Y), corporation(X), not law_practice_in_singapore(X), not joint_law_venture(X), not formal_law_alliance(X), not foreign_law_practice(X), not third_schedule_institution(X).
business_entity(X) :- carries_on(X,Y), business(Y), partnership(X), not law_practice_in_singapore(X), not joint_law_venture(X), not formal_law_alliance(X), not foreign_law_practice(X), not third_schedule_institution(X).
business_entity(X) :- carries_on(X,Y), business(Y), llp(X), not law_practice_in_singapore(X), not joint_law_venture(X), not formal_law_alliance(X), not foreign_law_practice(X), not third_schedule_institution(X).
business_entity(X) :- carries_on(X,Y), business(Y), soleprop(X), not law_practice_in_singapore(X), not joint_law_venture(X), not formal_law_alliance(X), not foreign_law_practice(X), not third_schedule_institution(X).
business_entity(X) :- carries_on(X,Y), business(Y), business_trust(X), not law_practice_in_singapore(X), not joint_law_venture(X), not formal_law_alliance(X), not foreign_law_practice(X), not third_schedule_institution(X).
business_entity(X) :- carries_on(X,Y), business(Y), not law_practice_in_singapore(X), not joint_law_venture(X), not formal_law_alliance(X), not foreign_law_practice(X), not third_schedule_institution(X).
So Docassemble-L4 is very challenging to use. But still, it’s a significant accomplishment because it moves Docassemble farther from single-purpose interviews, and toward generating conclusions by searching a large collection of legal rules derived from legislation. This is the same idea suggested by the Best Practices section of the Docassemble documentation, which suggests distributing the “mandatory” block that defines the objective of a particular interview separately from the file that defines the rest of the rules, so that the parts of the interview that are more likely to be reusable are easy to find in a separate module. Another benefit of Docassemble-L4’s roots in logic programming is that it can generate explanations for its conclusions, with links to the relevant passages of legislation. Maybe more of Docassemble-L4’s workflow can be automated in future versions.
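As a sketch of that Best Practices pattern (the file names, questions, and variables here are invented for illustration), the reusable rules live in one YAML file while only the interview file states the objective:

```yaml
# rules.yml: reusable logic, with no "mandatory" block
question: Is the organization a business entity?
yesno: is_business_entity
---
question: Is it a foreign law practice?
yesno: is_foreign_law_practice
---
code: |
  may_practice = is_business_entity and not is_foreign_law_practice
```

```yaml
# interview.yml: only this file carries the interview's objective
include:
  - rules.yml
---
mandatory: True
question: |
  ${ "This entity may practice law." if may_practice else "This entity may not practice law." }
```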
Although I’ve been critical enough already, I have one more quibble with Docassemble-L4’s design philosophy. I think Docassemble-L4 was written with the idea that each rule written in s(CASP) will correspond to a separately-numbered section of the published legislation. I think that’s a little bit of an artificial restriction, and I think it resulted in the example data containing a lot of shorter rules connected by meta-rules about how one rule “overrides” another. If instead the legislation was thought of as containing larger rules that could span multiple numbered paragraphs, it would no longer look like there were so many contradictions within the same legal code, and there would be less need for the user to create meta-rules about which rules take precedence over others in the event of a conflict. A really expressive schema for legal analysis should include a rich syntax to describe the relationship between a legal rule and the legal documents that enable it.
AuthoritySpoke version 0.7 is available on PyPI, bringing with it a new data input format using YAML files. For documentation on that feature, check out the just-published user guide or the API documentation. With this blog post I’ll go more into my reasoning in making the changes, and where I see AuthoritySpoke going next.
I planned for AuthoritySpoke to load two kinds of data: machine-serialized JSON objects, and also handmade test data. I wanted this handmade data to allow various kinds of abbreviations, and even to be tolerant of certain kinds of errors. And then I made the fateful decision to create just one set of data loading schemas for loading both kinds of data. That was probably the most costly design mistake I’ve made on AuthoritySpoke (that I know of!) so far.
The functions that expanded abbreviated text in input files turned out to be easily the most finicky and error-prone parts of AuthoritySpoke. They also had a tendency to break in inscrutable ways when I modified functions in far-away parts of AuthoritySpoke that I had assumed were safely isolated from the text expansion functions. And they caused workflows that should have been simple to become lengthy and hard to debug. It was as if all the data that I loaded into AuthoritySpoke first had to be placed on a very long conveyor belt (or to be more literal, a very tall call stack) where the data would be poked and tweaked and adjusted by a long series of functions that corrected typos, expanded abbreviations, and the like. When something went wrong, I’d have to inspect all of the functions along the conveyor belt until I found the one that wasn’t working as designed. The Marshmallow data serialization library was permissive enough to let me introduce all kinds of anomalies into the data loading process, but in some ways I used that freedom to shoot myself in the foot. And of course, when I tried to use open source libraries to automatically generate a publishable OpenAPI specification by analyzing the schemas I’d written, the result made no sense because I’d used the serializers in nonstandard ways. (AuthoritySpoke’s current OpenAPI specification is better, I think.)
Also, the first process I established for loading handmade data was for the user to create a JSON file. But really, nobody wants to create JSON files by hand without purpose-built tools. So in version 0.7, my solution is to create a separate data loading workflow for handmade data, which should now be in YAML instead of in JSON. Here’s an example of a YAML file using the new data input format, with one of the rules from the “Beard Act” test dataset that I posted about before.
- holdings:
    - inputs:
        - type: fact
          content: "{the suspected beard} was facial hair"
        - type: fact
          content: the length of the suspected beard was >= 5 millimetres
        - type: fact
          content: the suspected beard occurred on or below the chin
      outputs:
        - type: fact
          content: the suspected beard was a beard
      enactments:
        - node: /test/acts/47/4
          exact: "In this Act, beard means any facial hair no shorter than 5 millimetres
            in length that: occurs on or below the chin"
      universal: true
(Nobody wants to create YAML files by hand either, but that’s a problem for another day.)
The YAML data loading module can now be kept separate from the rest of AuthoritySpoke, where it’ll be less likely to hurt anyone, and the workflow for loading data from JSON won’t include any features for handling abbreviations or typos. Most importantly for me, I’ll be able to write unit tests that get closer to isolating just the functions they’re really trying to test, without touching the text formatting functions.
I considered switching from Marshmallow to the trendier Pydantic serializer, but I decided against it for two related reasons. First, the AuthoritySpoke classes that represent units of legal analysis already have a very complicated subclass inheritance pattern. Pydantic requires any class that’s going to be serialized to also inherit from a Pydantic serialization parent class. I was afraid that inheriting another subclass would have added even more complexity that could have had unforeseen consequences. Second, I’ve had good experiences applying the design concept of dependency inversion. I want to think of serialization libraries as implementation details, not as core features of AuthoritySpoke. By sticking with Marshmallow, I can keep the serialization schemas in their own modules separate from the core business logic. The core modules of AuthoritySpoke don’t have to “know about” the serializer classes, and I can write unit tests for the core business logic that don’t touch Marshmallow in any way.
The biggest challenge remaining in AuthoritySpoke’s data schema (including the simpler non-YAML schema) is that it’s a polymorphic schema, meaning more than one object schema can occur in the same place. For instance, an “input” or “output” for AuthoritySpoke’s Holding class could be a Fact, or it could be an item of Evidence, or other things. In order to implement the feature of polymorphism, AuthoritySpoke needs to import not just Marshmallow but also a related library called marshmallow-oneofschema. I’ve learned that I should get nervous when I import a software package without a large and active community, and for me the easiest way to measure that community is GitHub stars, which basically correspond to satisfied users. Marshmallow has 5,500 stars, which is not that high compared to the 21,000 stars that its competitor Django Rest Framework has. (Pydantic has 6,500.) But if I want to generate an OpenAPI specification for my Marshmallow schema, I have to also download apispec, which has 859 stars at the time of writing. Then my polymorphic schema requires me to grab marshmallow-oneofschema, which has a mere 96 stars. And then the polymorphic part of my schema needs to be included in the OpenAPI specification too, so I have to import apispec-oneofschema, which has just eight stars including mine. Pretty scary. These libraries could have trouble in the future, and I expect to be relying on them a lot as I move forward with AuthoritySpoke.
The future of AuthoritySpoke depends on getting it working with web APIs. Not to mention a web user interface. Version 0.7 does a lot to simplify one of AuthoritySpoke’s data models to make it suitable for the web. An even simpler data model would be better, but I think the foundation exists to design ways to share and organize judicial rule models on authorityspoke.com.
Around the beginning of 2021, the Free Law Project extracted the code that it’s been using to link case citations within CourtListener, and released it as a new open source Python package called Eyecite. I think Eyecite could become the most widely useful open source legal analysis tool to be released by anyone so far. It seems to have incredible potential for citation network analysis, and for preparing caselaw for natural language processing. I’m sure tools like this have existed inside commercial publishers for a long time, but providing these capabilities to open source developers could make a huge difference in expanding access to law.
Eyecite is built atop two arduous research projects that were themselves released as Python packages: Courts-DB and Reporters-DB. These provide the data that lets Eyecite know which strings are valid case citations, and what courts published the opinions at each citation. Courts-DB and Reporters-DB were also created by the Free Law Project, building on earlier work by Frank Bennett and the Legal Resource Registry.
I’ll use the rest of this blog post to try out Eyecite’s basic features and give my first impressions. Eyecite is still under active development and I’m testing the version on the current master branch, which isn’t an official release version so it could be extra-buggy.
I tested Eyecite’s citation detection feature on the first paragraph of the discussion section of the US Supreme Court’s recent opinion in Google v. Oracle America.
>>> import eyecite
text_from_opinion = """Copyright and patents, the Constitution says,
are to “promote the Progress of Science and useful Arts,
by securing for limited Times to Authors and Inventors the
exclusive Right to their respective Writings and Discoveries.”
Art. I, §8, cl. 8. Copyright statutes and case law have made
clear that copyright has practical objectives. It grants an
author an exclusive right to produce his work (sometimes for
a hundred years or more), not as a special reward, but in order
to encourage the production of works that others might reproduce
more cheaply. At the same time, copyright has negative features.
Protection can raise prices to consumers. It can impose special
costs, such as the cost of contacting owners to obtain reproduction
permission. And the exclusive rights it awards can sometimes stand
in the way of others exercising their own creative powers. See
generally Twentieth Century Music Corp. v. Aiken, 422 U. S. 151,
156 (1975); Mazer v. Stein, 347 U. S. 201, 219 (1954)."""
Eyecite successfully discovered all three citations in the paragraph.
>>> citations = eyecite.get_citations(text_from_opinion)
>>> len(citations)
3
Eyecite also successfully found that the first citation wasn’t a citation to a case.
>>> citations[0]
NonopinionCitation(
token=SectionToken(
data='§8,',
start=254,
end=257),
index=93,
span_start=None,
span_end=None)
The only slight problem was that Eyecite captured just three characters of the non-opinion citation. If I needed to exclude the non-opinion citations from the text for some reason, it would have been better if it had found the full citation text “Art. I, §8, cl. 8”.
>>> citations[0].token.data
'§8,'
Eyecite identified the other two citations in the paragraph as case citations. It came up with an amazing amount of information about them, almost all of which looks correct (its only error was coming up with “Corp.” for the plaintiff’s name).
>>> citations[1]
FullCaseCitation(
token=CitationToken(
data='422 U. S. 151',
start=984,
end=997,
volume='422',
reporter='U. S.',
page='151',
exact_editions=(),
variation_editions=(
Edition(
reporter=Reporter(
short_name='U.S.',
name='United States Supreme Court Reports',
cite_type='federal',
is_scotus=True),
short_name='U.S.',
start=datetime.datetime(1875, 1, 1, 0, 0),
end=None),),
short=False,
extra_match_groups={}),
index=365,
span_start=None,
span_end=None,
reporter='U.S.',
page='151',
volume='422',
canonical_reporter='U.S.',
plaintiff='Corp.',
defendant='Aiken,',
pin_cite='156',
extra=None,
court='scotus',
year=1975,
parenthetical=None,
reporter_found='U. S.',
exact_editions=(),
variation_editions=(
Edition(
reporter=Reporter(
short_name='U.S.',
name='United States Supreme Court Reports',
cite_type='federal',
is_scotus=True),
short_name='U.S.',
start=datetime.datetime(1875, 1, 1, 0, 0), end=None),),
all_editions=(
Edition(
reporter=Reporter(
short_name='U.S.',
name='United States Supreme Court Reports', cite_type='federal',
is_scotus=True),
short_name='U.S.',
start=datetime.datetime(1875, 1, 1, 0, 0), end=None),),
edition_guess=Edition(
reporter=Reporter(
short_name='U.S.',
name='United States Supreme Court Reports', cite_type='federal',
is_scotus=True),
short_name='U.S.',
start=datetime.datetime(1875, 1, 1, 0, 0),
end=None)
)
Of course, the court that issued the cited opinion, and the reporter where it was published, are identified correctly.
>>> citations[1].court
'scotus'
>>> citations[1].reporter
'U.S.'
Eyecite can’t extract the exact date of the cited case, but it can get the start and end dates for the reporter series where the case was published, and it can also get the year from the parenthetical in the citation.
>>> citations[1].year
1975
It’s also worth noticing how Eyecite handles “Id.” citations. I grabbed a paragraph from the Facts section of Google v. Oracle America with an example of an “Id.” citation. But this time, because the text looks like it probably has a problem with line breaks or whitespace, I’ll also try out Eyecite’s utility function for cleaning up opinion text.
facts_section = """Google envisioned an Android platform that was free and
open, such that software developers could use the tools
found there free of charge. Its idea was that more and more
developers using its Android platform would develop ever
more Android-based applications, all of which would make
Google’s Android-based smartphones more attractive to ultimate consumers.
Consumers would then buy and use ever
more of those phones. Oracle America, Inc. v. Google Inc.,
872 F. Supp. 2d 974, 978 (ND Cal. 2012); App. 111, 464.
That vision required attracting a sizeable number of skilled
programmers.
At that time, many software developers understood and
wrote programs using the Java programming language, a
language invented by Sun Microsystems (Oracle’s predecessor). 872 F. Supp. 2d, at 975, 977. About six million programmers had spent considerable time learning, and then
using, the Java language. App. 228. Many of those programmers used Sun’s own popular Java SE platform to develop new programs primarily for use in desktop and laptop
computers. Id., at 151–152, 200. That platform allowed
developers using the Java language to write programs that
were able to run on any desktop or laptop computer, regardless of the underlying hardware (i.e., the programs were in
large part “interoperable”). 872 F. Supp. 2d, at 977. Indeed, one of Sun’s slogans was “‘write once, run anywhere.’”
886 F. 3d, at 1186."""
To use the clean_text function, you pass a parameter containing the names of the cleaning functions you want to use.
>>> clean_facts_section = eyecite.clean_text(facts_section, ["all_whitespace"])
I can verify that the cleaning function removed some whitespace by comparing the length of the two text strings.
>>> len(facts_section) - len(clean_facts_section)
78
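Under the hood, this kind of cleanup can be approximated with a one-line regular expression. The sketch below is my own rough approximation of what an “all_whitespace” cleaner does, not Eyecite’s actual implementation:

```python
import re

def collapse_whitespace(text: str) -> str:
    # Approximation (not eyecite's code): collapse every run of whitespace,
    # including line breaks, into a single space.
    return re.sub(r"\s+", " ", text).strip()

print(collapse_whitespace("one\ntwo   three"))  # one two three
```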
Running the get_citations function again, I found that it discovered all 5 citations.
>>> facts_section_citations = eyecite.get_citations(clean_facts_section)
>>> len(facts_section_citations)
5
Eyecite has special ShortCitation and IdCitation classes that will capture all the information available from a citation even when it’s not a full citation. Eyecite’s string representation of the ShortCitation class still looks a little wonky in the version I’m testing…
>>> print(facts_section_citations[1])
None, 872 F. Supp. 2d, at 975
…but by looking at the token attribute I can see that Eyecite found a lot of useful information.
>>> facts_section_citations[1].token
CitationToken(
data='872 F. Supp. 2d, at 975',
start=757,
end=780,
volume='872',
reporter='F. Supp. 2d',
page='975',
exact_editions=(
Edition(
reporter=Reporter(
short_name='F. Supp.',
name='Federal Supplement',
cite_type='federal',
is_scotus=False),
short_name='F. Supp. 2d',
start=datetime.datetime(1988, 1, 1, 0, 0),
end=datetime.datetime(2014, 8, 21, 0, 0)),),
variation_editions=(),
short=True,
extra_match_groups={})
In the short citation 872 F. Supp. 2d, at 975, 977, the start page of the cited opinion is omitted, but Eyecite has recognized the pin cite to two different pages.
>>> facts_section_citations[1].pin_cite
'975, 977'
The next citation is an “Id.” citation, which provides even less information than a ShortCitation.
>>> facts_section_citations[2]
Id., at 151
It looks like Eyecite wasn’t able to collect much from the “Id.” citation, other than the pin cite and the position of the citation in the text I provided.
>>> facts_section_citations[2].__dict__
{
'token': IdToken(data='Id.,', start=1041, end=1045),
'index': 310,
'span_start': None,
'span_end': 1052,
'pin_cite': 'at 151'
}
It might look like we’re going to have to match that “Id.” citation to the case it references manually. But no! Eyecite has another trick up its sleeve. If we pass an ordered list of citations to Eyecite’s resolve_citations function, it’ll match each “Id.” citation to the case cited by its antecedent.
>>> resolved_citations = eyecite.resolve_citations(facts_section_citations)
Basically, Eyecite uses the citations it recognizes to create Resource objects, and those Resources become keys in a lookup table. When you look up a Resource in resolved_citations, you get all the citations that refer to that Resource, including any “Id.” citations. I think this feature is still under development, and honestly I’d like to see more documentation about how to use it efficiently. But there are definitely great gains to be made from a tool that can understand “Id.” and “Supra” citations automatically.
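To make the antecedent-matching idea concrete without depending on Eyecite’s internal API, here’s a toy sketch of the resolution strategy: each “Id.” entry gets attributed to the most recent full citation before it. The function and data below are purely illustrative, not Eyecite code:

```python
def resolve(citation_strings):
    # Toy resolver (illustrative only): group citation strings under the
    # most recent full citation, treating "Id." entries as references to it.
    table = {}
    last_full = None
    for cite in citation_strings:
        if cite.startswith("Id."):
            key = last_full if last_full else "unmatched"
        else:
            key = cite
            last_full = cite
        table.setdefault(key, []).append(cite)
    return table

resolved = resolve(["872 F. Supp. 2d 974", "Id., at 151", "886 F. 3d 1179"])
print(resolved["872 F. Supp. 2d 974"])  # ['872 F. Supp. 2d 974', 'Id., at 151']
```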
Eyecite’s annotate function is exciting for anybody publishing caselaw online. It can add HTML links or other markup to the text that Eyecite just searched through for citations. CourtListener’s URL structure doesn’t seem to lend itself to automatically creating links, so instead I’ll give an example of automatically creating links to Harvard’s case.law website. I’ll start by getting a list of citations again.
>>> discussion_text = eyecite.clean_text(text_from_opinion, ["all_whitespace"])
>>> discussion_citations = eyecite.get_citations(discussion_text)
Next, I need a function that can generate the URL for a court opinion on case.law based on its CaseCitation object. Unfortunately Eyecite’s CaseCitation object doesn’t provide the same abbreviation style that case.law uses for the names of reporter volumes, so I had to add a mockup of a conversion table using the reporter_abbreviations variable. But the CaseCitation object does supply the volume and page fields for the reporter where the case is published, and the pin_cite field seems to be easy to transform into the format case.law needs.
import re
from urllib.parse import urlunparse, ParseResult
from eyecite.models import CaseCitation
def url_from_citation(cite: CaseCitation) -> str:
"""Make a URL for linking to an opinion on case.law."""
reporter_abbreviations = {
'U.S.': "us",
"F. Supp.": "f-supp"
}
reporter = reporter_abbreviations[cite.canonical_reporter]
if cite.pin_cite:
# Assumes that the first number in the pin_cite field is
# the correct HTML fragment identifier for the URL.
page_number = re.search(r'\d+', cite.pin_cite).group()
fragment = f"p{page_number}"
else:
fragment = ""
url_parts = ParseResult(
scheme='https',
netloc='cite.case.law',
path=f'/{reporter}/{cite.volume}/{cite.page}/',
params='',
query='',
fragment=fragment)
return urlunparse(url_parts)
>>> url_from_citation(citations[2])
'https://cite.case.law/us/347/201/#p219'
Now I can write a short function to make annotations in the expected format, and then use Eyecite to insert these links in the text anywhere that Eyecite finds a case citation.
def make_annotations(
citations: list[CaseCitation]) -> list[tuple[tuple[int, int], str, str]]:
result = []
for cite in citations:
if isinstance(cite, CaseCitation):
caselaw_url = url_from_citation(cite)
result.append(
(cite.span(),
f'<a href="{caselaw_url}">',
"</a>")
)
return result
>>> annotations = make_annotations(discussion_citations)
>>> annotated_text = eyecite.annotate(discussion_text, annotations)
>>> print(annotated_text)
Copyright and patents, the Constitution says, are to “promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” Art. I, §8, cl. 8. Copyright statutes and case law have made clear that copyright has practical objectives. It grants an author an exclusive right to produce his work (sometimes for a hundred years or more), not as a special reward, but in order to encourage the production of works that others might reproduce more cheaply. At the same time, copyright has negative features. Protection can raise prices to consumers. It can impose special costs, such as the cost of contacting owners to obtain reproduction permission. And the exclusive rights it awards can sometimes stand in the way of others exercising their own creative powers. See generally Twentieth Century Music Corp. v. Aiken, <a href="https://cite.case.law/us/422/151/#p156">422 U. S. 151</a>, 156 (1975); Mazer v. Stein, <a href="https://cite.case.law/us/347/201/#p219">347 U. S. 201</a>, 219 (1954).
We can see that the annotate function has inserted hyperlink markup around the citations near the end of the text passage. And by displaying the text as Markdown, we can verify that the generated links go to the right places on case.law.
>>> from IPython.display import display, Markdown
>>> display(Markdown(annotated_text))
Copyright and patents, the Constitution says, are to “promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” Art. I, §8, cl. 8. Copyright statutes and case law have made clear that copyright has practical objectives. It grants an author an exclusive right to produce his work (sometimes for a hundred years or more), not as a special reward, but in order to encourage the production of works that others might reproduce more cheaply. At the same time, copyright has negative features. Protection can raise prices to consumers. It can impose special costs, such as the cost of contacting owners to obtain reproduction permission. And the exclusive rights it awards can sometimes stand in the way of others exercising their own creative powers. See generally Twentieth Century Music Corp. v. Aiken, 422 U. S. 151, 156 (1975); Mazer v. Stein, 347 U. S. 201, 219 (1954).
Overall, Eyecite is a powerful tool with great potential to help the legal field gain the benefits of Python’s data analysis and data science ecosystem.
One of the most difficult AuthoritySpoke features for users to understand has been the ability for the Factors of legal rules to have “generic context” affecting how the rules can be compared to one another. This article will try to make that concept a little clearer, and also describe how to use contexts with comparison methods like .means(), .implies(), and .contradicts().
As mentioned before, the text of a Factor is like a phrase, and the terms of the Factor are like the nouns that function as the subject and objects of the phrase. If a Factor’s terms are labeled as “generic”, then the Factor can be considered to have the same meaning as another Factor that has different terms, as long as the other Factor’s terms are also labeled “generic”.
Here’s an example (based on the documentation for Nettlesome, which is a dependency of AuthoritySpoke). Each Entity object is “generic” by default, so all four of the “terms” in the example are considered “generic”. I’ve also given each Entity a generic-sounding name, instead of a proper noun, to emphasize that each Fact could refer to many different people. However, nothing in AuthoritySpoke will stop you from using proper nouns as the names of generic terms. All the examples in this article were tested on version 0.6.0 of AuthoritySpoke.
>>> from authorityspoke import Fact, Entity
>>> poet_payment = Fact("$payor made a payment to $payee",
... terms=[Entity("the fierce philanthropist"), Entity("the starving poet")])
>>> henchman_payment = Fact("$payor made a payment to $payee",
... terms=[Entity("the affable spy"), Entity("the devious henchman")])
>>> print(henchman_payment)
the fact that <the affable spy> made a payment to <the devious henchman>
The means() method can be used to determine whether two Facts have the same meaning.
>>> poet_payment.means(henchman_payment)
True
The explain_same_meaning() method generates an Explanation showing why the two Facts can be considered to have the same meaning.
>>> print(poet_payment.explain_same_meaning(henchman_payment))
"""Because <the fierce philanthropist> is like <the affable spy>, and <the starving poet> is like <the devious henchman>,
the fact that <the fierce philanthropist> made a payment to <the starving poet>
MEANS
the fact that <the affable spy> made a payment to <the devious henchman>"""
The illustration above shows that when AuthoritySpoke finds two Facts to be the same, it means that the relationships they describe are the same. It doesn’t at all mean that the identities of the “generic” Entities are the same.
In AuthoritySpoke, the “context” of a comparison dictates which generic terms are considered parallel to one another. If no context parameter is passed in to a comparison method like .means(), then the comparison method will try every permutation of both Facts’ generic terms to find ways to match them all. Passing in a context parameter limits the ways that the generic terms can be matched.
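The permutation search described above can be pictured with a few lines of standard-library Python. This toy example (my illustration, not AuthoritySpoke’s code) just enumerates the candidate one-to-one pairings between two lists of generic terms:

```python
from itertools import permutations

# Every one-to-one pairing between the two Facts' generic terms is a
# candidate matching that a comparison method could try.
left = ["the fierce philanthropist", "the starving poet"]
right = ["the affable spy", "the devious henchman"]

candidate_matchings = [list(zip(left, perm)) for perm in permutations(right)]
print(len(candidate_matchings))  # 2
```

With two terms on each side there are only two candidate matchings, but the count grows factorially with the number of terms, which is why a context parameter that pins down the pairing can be useful.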
In this example, the two Facts are compared in a context where it has been established that the philanthropist is more like the henchman, and the poet is more like the spy. (After all, to AuthoritySpoke these names are just strings, and AuthoritySpoke has no idea that they don’t sound similar.) To create this context, we pass in a tuple of two lists: a list of terms on the left that will be replaced, and a list of replacements from the right in the corresponding order. Because the context precludes the first term of poet_payment from being matched to the first term of henchman_payment, the method that checks whether these Facts have the same meaning now returns False.
>>> poet_payment.means(
...     henchman_payment,
...     context=(
...         [Entity("the fierce philanthropist"), Entity("the starving poet")],
...         [Entity("the devious henchman"), Entity("the affable spy")]))
False
Instead of using .means(), we could also have used the context parameter in the same format to test whether the first Factor .implies() or .contradicts() the other. There’s more information about using a context parameter with comparison methods in the Nettlesome documentation.
Up to now, we’ve only compared Factors outside the scope of any legal rule or judicial holding. To build on the ideas above and do a little real legal analysis, let’s see an example of how generic context works in an example Holding based on United States v. Harmon, a recent case from the United States District Court for the District of Columbia.
First, we use AuthoritySpoke’s legislation download client to get the statute being interpreted. If you try out this code yourself, you’ll need to follow the directions for creating an environment variable called “LEGISLICE_API_TOKEN”.
>>> import os
>>> from dotenv import load_dotenv
>>> from authorityspoke.io.downloads import Client
>>> load_dotenv()
>>> LEGISLICE_API_TOKEN = os.getenv("LEGISLICE_API_TOKEN")
>>> CLIENT = Client(api_token=LEGISLICE_API_TOKEN)
>>> offense_statute = CLIENT.read("/us/usc/t18/s1960/a")
>>> print(offense_statute)
"Whoever knowingly conducts, controls, manages, supervises, directs, or owns all or part of an unlicensed money transmitting business, shall be fined in accordance with this title or imprisoned not more than 5 years, or both." (/us/usc/t18/s1960/a 2013-07-18)
Next, we create the Facts that the court found to be relevant to the elements of a criminal offense, and we combine them into a Holding.
>>> from authorityspoke import Entity, Fact, Holding, Predicate
>>> no_license = Fact(
... "$business was licensed as a money transmitting business",
... truth=False,
... terms=Entity("Helix"))
>>> operated = Fact(
... "$person operated $business as a business",
... terms=[Entity("Harmon"), Entity("Helix")])
>>> transmitting = Fact(
... "$business was a money transmitting business",
... terms=Entity("Helix"))
>>> offense = Fact(
... "$person committed the offense of conducting an unlicensed money transmitting business",
... terms=Entity("Harmon"))
>>> offense_holding = Holding.from_factors(
... inputs=[operated, transmitting, no_license],
... outputs=offense,
... enactments=offense_statute,
... universal=True)
This Holding simply says that if a person has committed the elements of the offense, the court may convict the person of the offense.
>>> print(offense_holding)
"""the Holding to ACCEPT
the Rule that the court MAY ALWAYS impose the
RESULT:
the fact that <Harmon> committed the offense of conducting an
unlicensed money transmitting business
GIVEN:
the fact that <Harmon> operated <Helix> as a business
the fact that <Helix> was a money transmitting business
the fact it was false that <Helix> was licensed as a money
transmitting business
GIVEN the ENACTMENT:
"Whoever knowingly conducts, controls, manages, supervises, directs, or owns all or part of an unlicensed money transmitting business, shall be fined in accordance with this title or imprisoned not more than 5 years, or both." (/us/usc/t18/s1960/a 2013-07-18)"""
And then we can create the more important Holding of the case, in which the court found that a bitcoin transmitting business met the statutory definition of a “money transmitting” business requiring a license.
>>> definition_statute = CLIENT.read("/us/usc/t18/s1960/b/2")
>>> bitcoin = Fact(
... "$business transferred bitcoin on behalf of the public",
... terms=Entity("Helix"))
>>> bitcoin_holding = Holding.from_factors(
... inputs=bitcoin,
... outputs=transmitting,
... enactments=definition_statute,
... universal=True)
>>> print(bitcoin_holding)
"""the Holding to ACCEPT
the Rule that the court MAY ALWAYS impose the
RESULT:
the fact that <Helix> was a money transmitting business
GIVEN:
the fact that <Helix> transferred bitcoin on behalf of the public
GIVEN the ENACTMENT:
"the term “money transmitting” includes transferring funds on behalf of the public by any and all means including but not limited to transfers within this country or to locations abroad by wire, check, draft, facsimile, or courier; and" (/us/usc/t18/s1960/b/2 2013-07-18)"""
By adding the two Holdings above, we get a new Holding indicating that if a person operated a business that transferred bitcoin on behalf of the public without a “money transmitting business” license, the person may be found guilty of the offense. To generate this Holding, AuthoritySpoke finds that the terms named “Harmon” and “Helix” in offense_holding can be matched to the terms with the same names in bitcoin_holding.
>>> result = bitcoin_holding + offense_holding
>>> print(result)
"""the Holding to ACCEPT
the Rule that the court MAY ALWAYS impose the
RESULT:
the fact that <Harmon> committed the offense of conducting an
unlicensed money transmitting business
the fact that <Helix> was a money transmitting business
GIVEN:
the fact that <Harmon> operated <Helix> as a business
the fact it was false that <Helix> was licensed as a money
transmitting business
the fact that <Helix> transferred bitcoin on behalf of the public
GIVEN the ENACTMENTS:
"Whoever knowingly conducts, controls, manages, supervises, directs, or owns all or part of an unlicensed money transmitting business, shall be fined in accordance with this title or imprisoned not more than 5 years, or both." (/us/usc/t18/s1960/a 2013-07-18)
"the term “money transmitting” includes transferring funds on behalf of the public by any and all means including but not limited to transfers within this country or to locations abroad by wire, check, draft, facsimile, or courier; and" (/us/usc/t18/s1960/b/2 2013-07-18)"""
Finally, once that Holding is created, we can also generalize the Holding by using a new context to apply it to different generic terms.
>>> result_with_new_context = result.new_context(
... ([Entity("Harmon"), Entity("Helix")],
... [Entity("Schrute"), Entity("Schrute Bucks")]))
>>> print(result_with_new_context)
"""the Holding to ACCEPT
the Rule that the court MAY ALWAYS impose the
RESULT:
the fact that <Schrute> committed the offense of conducting an
unlicensed money transmitting business
the fact that <Schrute Bucks> was a money transmitting business
GIVEN:
the fact that <Schrute> operated <Schrute Bucks> as a business
the fact it was false that <Schrute Bucks> was licensed as a money
transmitting business
the fact that <Schrute Bucks> transferred bitcoin on behalf of the
public
GIVEN the ENACTMENTS:
"Whoever knowingly conducts, controls, manages, supervises, directs, or owns all or part of an unlicensed money transmitting business, shall be fined in accordance with this title or imprisoned not more than 5 years, or both." (/us/usc/t18/s1960/a 2013-07-18)
"the term “money transmitting” includes transferring funds on behalf of the public by any and all means including but not limited to transfers within this country or to locations abroad by wire, check, draft, facsimile, or courier; and" (/us/usc/t18/s1960/b/2 2013-07-18)"""
I’m happy to announce I’ve published a new Python package called Nettlesome, for creating computable semantic tags that describe the contents of documents. When you browse through Nettlesome’s documentation, you’ll see a lot of concepts that look like refugees from logic programming, like Terms and Predicates. And yet Nettlesome doesn’t have any way to create a set of assertions about which Statements imply one another, nor does it have a function for theorem proving. So why does Nettlesome exist, and what gap does it fill?
Nettlesome originated as a spinoff of my primary Python project, AuthoritySpoke. AuthoritySpoke is a library that enables semantic reasoning with legal data. But I eventually found that AuthoritySpoke needed distinct layers: the top layer of legal analysis was built atop a layer of general-purpose semantic reasoning tasks, such as testing whether quantitative statements imply or contradict one another. Moving the general-purpose features to the separate Nettlesome package should help keep the code more organized. It also means that I have the freedom to make AuthoritySpoke more specific and limited, because I know Nettlesome will exist separately to support other projects that might develop in other directions. This way, AuthoritySpoke doesn’t have to be a library for “all legal analysis”. It can be limited to just common law jurisdictions, just the United States, just the judicial branch of government, or whatever. Nettlesome, on the other hand, can be a “legaltech” package without the baggage of any particular legal system or legal industry.
Separating Nettlesome into its own package has also made it easier for me to integrate Nettlesome with Sympy, the Python library for symbolic math. In the short term, importing Sympy has helped me delete some poorly-written math functions of mine, and replace them with some well-written imported code I’ll hopefully never need to help maintain. But in the longer term I think future versions of Nettlesome (and AuthoritySpoke) can lean on Sympy to enable more complex math calculations, like finding the sum of the different parts of a monetary judgment or a criminal sentence. I’ve already found the Pint package to be easy to use and integrate. Pint supports calculations with quantities measured in all the familiar units from a physics class: minutes, kilometers per hour, square feet, ounces, and so on. Here’s an example calculation where Pint, Sympy, and Nettlesome are all working together:
>>> from nettlesome import Comparison, Entity, Statement
>>> under_1mi = Comparison("the distance from $site1 to $site2 was", sign="<", expression="1 mile")
>>> under_2km = Comparison("the distance from $site1 to $site2 was", sign="<", expression="2 kilometers")
>>> meeting = Entity("the political convention")
>>> protest = Entity("the free speech zone")
>>> under_1mi_to_protest = Statement(under_1mi, terms=[protest, meeting])
>>> under_2km_to_protest = Statement(under_2km, terms=[meeting, protest])
>>> str(under_1mi_to_protest)
'the statement that the distance from <the free speech zone> to <the political convention> was less than 1 mile'
>>> str(under_2km_to_protest)
'the statement that the distance from <the political convention> to <the free speech zone> was less than 2 kilometer'
>>> under_1mi_to_protest.implies(under_2km_to_protest)
True
>>> under_2km_to_protest.implies(under_1mi_to_protest)
False
>>> under_1mi_to_protest.contradicts(under_2km_to_protest)
False
>>> under_2km_to_protest.contradicts(under_1mi_to_protest)
False
So please check out the introduction and examples in Nettlesome’s documentation, import Nettlesome from the Python Package Index, try it out, and open a Github issue for any bugs, suggestions, concerns, or feature requests.
The AuthoritySpoke library provides you with Python classes that you can use to represent a limited subset of English statements, so you can create computable annotations representing aspects of legal reasoning and factfinding. In the newly-released version 0.5 of AuthoritySpoke, I’ve redesigned the interface for creating these phrases to use Python template strings, which I think should make the interface more idiomatic and consistent with existing Python programming patterns.
I chose to implement this feature with template strings instead of Python’s more powerful methods for inserting data into text, such as f-strings or the .format() method, because template strings’ relative lack of versatility makes them more predictable and less bug-prone. Template strings don’t execute any code when they run, so they present less of a security problem and they can be used with untrusted user-generated data. Since this is a big change to the interface, I’ll provide a variety of examples of how template strings will work with other AuthoritySpoke classes. These examples can also be found in the documentation, which may be updated more frequently.
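The safety property mentioned above is standard-library behavior, and it’s easy to verify directly with string.Template (the strings here are my own examples):

```python
from string import Template

# A Template performs plain text substitution and never evaluates its
# inputs, so untrusted values can't trigger code execution.
template = Template("$person paid $amount")
print(template.substitute(person="Ann", amount="$100"))  # Ann paid $100

# safe_substitute() tolerates missing placeholders instead of raising KeyError.
print(template.safe_substitute(person="Ann"))  # Ann paid $amount
```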
Here’s an example of a template string used to create a Predicate object in AuthoritySpoke version 0.5:
>>> from authorityspoke import Predicate
>>> parent_sentence = Predicate("$mother was ${child}'s parent")
The phrase that we passed to the Predicate constructor is used to create a Python template string. Template strings are part of the Python standard library. The dollar signs and curly brackets are special symbols used to indicate placeholders in Python’s template string syntax.
AuthoritySpoke’s Predicate class is structured like a statement in Predicate logic. The Predicate is like a partial sentence with blank spaces marked by placeholders. The placeholders can be replaced by nouns that become the subjects or objects of this potential sentence. The Predicate class isn’t intended to turn Python into a logic programming environment like Prolog, nor is it designed to be used as an interface to a deep learning language model like GPT-3. Instead, the Predicate class is designed to help you run computations over a curated set of annotations where you can usually guarantee that the same phrasing has been used for the same legal concept.
Here’s an example of how Python lets you replace the placeholders in a template string with new text.
>>> parent_sentence.template.substitute(mother="Ann", child="Bob")
"Ann was Bob's parent"
Don’t worry: the use of the past tense doesn’t indicate that a tragedy has befallen Ann or Bob. The Predicate class is designed to be used only with an English-language phrase in the past tense. The past tense is used because legal analysis is usually backward-looking, determining the legal effect of past acts or past conditions. Don’t use capitalization or end punctuation to signal the beginning or end of the phrase, because the phrase may be used in a context where it’s only part of a longer sentence.
Predicates can be compared using AuthoritySpoke’s .means(), .implies(), and .contradicts() methods. The means method checks whether one Predicate has the same meaning as another Predicate. One reason for comparing Predicates using the means method instead of Python’s == operator is that the means method can still consider Predicates to have the same meaning even if they use different identifiers for their placeholders.
>>> another_parent_sentence = Predicate("$adult was ${kid}'s parent")
>>> parent_sentence.template == another_parent_sentence.template
False
>>> another_parent_sentence.means(parent_sentence)
True
You can also add a truth attribute to a Predicate to indicate whether the statement described by the template is considered true or false. AuthoritySpoke can then use that attribute to evaluate relationships between the truth values of different Predicates with the same template text. If you omit a truth parameter when creating a Predicate, the default value is True.
>>> not_parent_sentence = Predicate("$adult was ${kid}'s parent", truth=False)
>>> str(not_parent_sentence)
"it was false that $adult was ${kid}'s parent"
>>> parent_sentence.means(not_parent_sentence)
False
>>> parent_sentence.contradicts(not_parent_sentence)
True
In the parent_sentence example above, there are really two different placeholder formats. The first placeholder, mother, is just preceded by a dollar sign. The second placeholder, child, is preceded by a dollar sign and an open curly bracket, and followed by a closed curly bracket. These formats aren’t specific to AuthoritySpoke; they’re part of the Python standard library. The difference is that the format with just the dollar sign can only be used for a placeholder that is surrounded by whitespace. If the placeholder is next to some other character, like an apostrophe, then you need to use the “braced” format with the curly brackets. The placeholders themselves need to be valid Python identifiers, which means they can only be made up of letters, numbers, and underscores, and they can’t start with a number. Docassemble users might already be familiar with these rules, since Docassemble variables also have to be Python identifiers. Check out Docassemble’s documentation for more guidance on creating valid Python identifiers.
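Both rules are easy to check with the standard library. The snippet below demonstrates the braced format next to adjoining text, and uses str.isidentifier() to test placeholder names; the example strings are mine, not from AuthoritySpoke:

```python
from string import Template

# The braced format is required when the placeholder abuts characters that
# could otherwise be read as part of the identifier.
plural = Template("the ${party}s signed the contract")
print(plural.substitute(party="defendant"))  # the defendants signed the contract

# Placeholder names must be valid Python identifiers.
print("taxpayer_2".isidentifier())  # True
print("2taxpayer".isidentifier())   # False
```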
AuthoritySpoke’s Comparison class extends the concept of a Predicate. A Comparison still contains a truth value and a template string, but that template should be used to identify a quantity that will be compared to an expression using a sign such as an equal sign or a greater-than sign. This expression must be a constant: either an integer, a floating point number, a date, or a physical quantity expressed in units that can be parsed using the pint library. To encourage consistent phrasing, the template string in every Comparison object must end with the word “was”. AuthoritySpoke will then build the rest of the phrase using the comparison sign and expression that you provide.
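The way the finished phrase gets assembled can be sketched with a small lookup table. The wording below is my guess inferred from the strings AuthoritySpoke prints, not its actual implementation:

```python
# Hypothetical mapping from comparison signs to English phrases, inferred
# from AuthoritySpoke's printed output (an assumption, not its real code).
SIGN_WORDS = {
    ">=": "at least",
    "<=": "no more than",
    ">": "greater than",
    "<": "less than",
    "=": "exactly equal to",
}

def render_comparison(template_text: str, sign: str, expression) -> str:
    # Append the sign's English phrase and the expression after the
    # template text, which by convention ends with "was".
    return f"{template_text} {SIGN_WORDS[sign]} {expression}"

print(render_comparison(
    "the weight of marijuana that $defendant possessed was",
    ">=",
    "0.5 kilogram"))
```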
To use a measurement as a Comparison’s expression, pass the measurement as a string when constructing the Comparison object, and it will be converted to a pint Quantity.
>>> from authorityspoke import Comparison
>>> drug_comparison = Comparison(
...     "the weight of marijuana that $defendant possessed was",
...     sign=">=",
...     expression="0.5 kilograms")
>>> str(drug_comparison)
'that the weight of marijuana that $defendant possessed was at least 0.5 kilogram'
(The pint library always uses singular nouns for units like “kilogram”, when rendering them as text.)
By making the quantitative part of the phrase explicit, you make it possible for AuthoritySpoke to consider quantities when checking whether one Comparison implies or contradicts another.
>>> smaller_drug_comparison = Comparison(
...     "the weight of marijuana that $defendant possessed was",
...     sign=">=",
...     expression="250 grams")
>>> str(smaller_drug_comparison)
'that the weight of marijuana that $defendant possessed was at least 250 gram'
AuthoritySpoke will understand that if the weight was at least 0.5 kilograms, that implies it was also at least 250 grams.
>>> drug_comparison.implies(smaller_drug_comparison)
True
If you phrase a Comparison with an inequality sign using truth=False, AuthoritySpoke will silently modify your statement so it can have truth=True with a different sign. In this example, the user’s input indicates that it’s false that the weight of the marijuana was more than 10 grams. AuthoritySpoke interprets this to mean it’s true that the weight was no more than 10 grams.
>>> drug_comparison_with_upper_bound = Comparison(
...     "the weight of marijuana that $defendant possessed was",
...     sign=">",
...     expression="10 grams",
...     truth=False)
>>> str(drug_comparison_with_upper_bound)
'that the weight of marijuana that $defendant possessed was no more than 10 gram'
Of course, this Comparison contradicts the other Comparisons that asserted the weight was much greater.
>>> drug_comparison_with_upper_bound.contradicts(drug_comparison)
True
The quantity that a Comparison parses doesn’t have to be a weight. It could also be a distance, a time, a volume, a unit of surface area such as square kilometers or acres, or a unit that combines multiple dimensions, such as miles per hour or meters per second.
When the number needed for a Comparison isn’t a physical quantity that can be described with the units in the pint library, you should phrase the text in the template string to explain what the number describes. The template string will still need to end with the word “was”. The value of the expression parameter should be an integer or a floating point number, not a string to be parsed.
>>> three_children = Comparison(
...     "the number of children in ${taxpayer}'s household was",
...     sign="=",
...     expression=3)
>>> str(three_children)
"that the number of children in ${taxpayer}'s household was exactly equal to 3"
The numeric expression will still be available for comparison methods like implies or contradicts, but no unit conversion will be available.
>>> at_least_two_children = Comparison(
...     "the number of children in ${taxpayer}'s household was",
...     sign=">=",
...     expression=2)
>>> three_children.implies(at_least_two_children)
True
Floating point comparisons work similarly.
>>> specific_tax_rate = Comparison(
...     "${taxpayer}'s marginal income tax rate was",
...     sign="=",
...     expression=0.3)
>>> tax_rate_over_25 = Comparison(
...     "${taxpayer}'s marginal income tax rate was",
...     sign=">",
...     expression=0.25)
>>> specific_tax_rate.implies(tax_rate_over_25)
True
The expression field of a Comparison can also be a datetime.date.
>>> from datetime import date
>>> copyright_date_range = Comparison(
...     "the date when $work was created was",
...     sign=">=",
...     expression=date(1978, 1, 1))
>>> str(copyright_date_range)
'that the date when $work was created was at least 1978-01-01'
And dates and date ranges can be compared with each other, just as numbers can be compared to number ranges.
>>> copyright_date_specific = Comparison(
...     "the date when $work was created was",
...     sign="=",
...     expression=date(1980, 6, 20))
>>> copyright_date_specific.implies(copyright_date_range)
True
AuthoritySpoke isn’t limited to comparing Predicates and Comparisons containing unassigned placeholder text. You can use Entity objects to assign specific terms to the placeholders. You then link the terms to the Predicate or Comparison inside a Fact object.
>>> from authorityspoke import Entity, Fact
>>> ann = Entity("Ann", generic=False)
>>> claude = Entity("Claude", generic=False)
>>> ann_tax_rate = Fact(specific_tax_rate, terms=ann)
>>> claude_tax_rate = Fact(tax_rate_over_25, terms=claude)
>>> str(ann_tax_rate)
"the fact that Ann's marginal income tax rate was exactly equal to 0.3"
>>> str(claude_tax_rate)
"the fact that Claude's marginal income tax rate was greater than 0.25"
Before, we saw that the Comparison specific_tax_rate implies tax_rate_over_25. But when we have a fact about the tax rate of a specific person named Ann, it doesn’t imply anything about Claude’s tax rate.
>>> ann_tax_rate.implies(claude_tax_rate)
False
That seems to be the right answer in this case. But sometimes, in legal reasoning, we want to refer to people in a generic sense. We might want to say that a statement about one person can imply a statement about a different person, because most legal rulings can be generalized to apply to many different people regardless of exactly who those people are. To illustrate that idea, let’s create two “generic” people and show that a Fact about one of them implies a Fact about the other.
>>> devon = Entity("Devon", generic=True)
>>> elaine = Entity("Elaine", generic=True)
>>> devon_tax_rate = Fact(specific_tax_rate, terms=devon)
>>> elaine_tax_rate = Fact(tax_rate_over_25, terms=elaine)
>>> devon_tax_rate.implies(elaine_tax_rate)
True
In the string representations of Facts, generic Entities are shown in angle brackets as a reminder that they may be considered to correspond to different Entities when being compared to other objects.
>>> str(devon_tax_rate)
"the fact that <Devon>'s marginal income tax rate was exactly equal to 0.3"
>>> str(elaine_tax_rate)
"the fact that <Elaine>'s marginal income tax rate was greater than 0.25"
When the implies method returns True, we can also use the explain_implication method to find out which pairs of generic terms can be considered analogous to one another.
>>> explanation = devon_tax_rate.explain_implication(elaine_tax_rate)
>>> str(explanation)
'ContextRegister(<Devon> is like <Elaine>)'
If for some reason you need to mention the same term more than once in a Predicate or Comparison, use the same placeholder for that term each time. When you provide a sequence of terms for the Fact object using that Predicate, only include each unique term once. The terms should be listed in the same order that they first appear in the template text.
>>> opened_account = Fact(
...     Predicate("$applicant opened a bank account for $applicant and $cosigner"),
...     terms=(devon, elaine))
>>> str(opened_account)
'the fact that <Devon> opened a bank account for <Devon> and <Elaine>'
Sometimes, a Predicate or Comparison needs to mention two terms that are different from each other, but that have interchangeable positions in that particular phrase. To convey interchangeability, the template string should use identical text for the placeholders for the interchangeable terms, except that the different placeholders should each end with a different digit.
>>> ann = Entity("Ann", generic=False)
>>> bob = Entity("Bob", generic=False)
>>> same_family = Predicate(
...     "$relative1 and $relative2 both were members of the same family")
>>> ann_and_bob_were_family = Fact(
...     same_family,
...     terms=(ann, bob))
>>> str(ann_and_bob_were_family)
'the fact that Ann and Bob both were members of the same family'
>>> bob_and_ann_were_family = Fact(
...     same_family,
...     terms=(bob, ann))
>>> str(bob_and_ann_were_family)
'the fact that Bob and Ann both were members of the same family'
>>> ann_and_bob_were_family.means(bob_and_ann_were_family)
True
If you choose placeholders that don’t fit the pattern of being identical except for a final digit, then transposing two non-generic terms will change the meaning of the Fact.
>>> parent_sentence = Predicate("$mother was ${child}'s parent")
>>> ann_is_parent = Fact(parent_sentence, terms=(ann, bob))
>>> bob_is_parent = Fact(parent_sentence, terms=(bob, ann))
>>> str(ann_is_parent)
"the fact that Ann was Bob's parent"
>>> str(bob_is_parent)
"the fact that Bob was Ann's parent"
>>> ann_is_parent.means(bob_is_parent)
False
In AuthoritySpoke, terms referenced by a Predicate or Comparison can contain references to Facts as well as Entities. That means they can include the text of other Predicates. This feature is intended for incorporating references to what people said, knew, or believed.
>>> statement = Predicate("$speaker told $listener $event")
>>> bob_had_drugs = Fact(smaller_drug_comparison, terms=bob)
>>> bob_told_ann_about_drugs = Fact(statement, terms=(bob, ann, bob_had_drugs))
>>> str(bob_told_ann_about_drugs)
'the fact that Bob told Ann the fact that the weight of marijuana that Bob possessed was at least 250 gram'
A higher-order Predicate can be used to establish that one Fact implies another. In legal reasoning, it’s common to accept that if a person knew or communicated something, then the person also knew or communicated any facts that are obviously implied by what the person actually knew or said. In this example, the fact that Bob told Ann he possessed at least 0.5 kilograms implies that he also told Ann he possessed at least 250 grams.
>>> bob_had_more_drugs = Fact(drug_comparison, terms=bob)
>>> bob_told_ann_about_more_drugs = Fact(
...     statement,
...     terms=(bob, ann, bob_had_more_drugs))
>>> str(bob_told_ann_about_more_drugs)
'the fact that Bob told Ann the fact that the weight of marijuana that Bob possessed was at least 0.5 kilogram'
>>> bob_told_ann_about_more_drugs.implies(bob_told_ann_about_drugs)
True
However, a contradiction between Facts referenced in higher-order Predicates doesn’t cause the first-order Facts to contradict one another. It’s not contradictory to say that a person has said two contradictory things.
>>> bob_had_less_drugs = Fact(drug_comparison_with_upper_bound, terms=bob)
>>> bob_told_ann_about_less_drugs = Fact(
...     statement,
...     terms=(bob, ann, bob_had_less_drugs))
>>> str(bob_told_ann_about_less_drugs)
'the fact that Bob told Ann the fact that the weight of marijuana that Bob possessed was no more than 10 gram'
>>> bob_told_ann_about_less_drugs.contradicts(bob_told_ann_about_more_drugs)
False
Higher-order Facts can refer to terms that weren’t referenced by the first-order Fact. AuthoritySpoke will recognize that the use of different terms in the second-order Fact changes the meaning of the first-order Fact.
>>> claude_had_drugs = Fact(smaller_drug_comparison, terms=claude)
>>> bob_told_ann_about_claude = Fact(
...     statement,
...     terms=(bob, ann, claude_had_drugs))
>>> str(bob_told_ann_about_claude)
'the fact that Bob told Ann the fact that the weight of marijuana that Claude possessed was at least 250 gram'
>>> bob_told_ann_about_drugs.implies(bob_told_ann_about_claude)
False
Visit the AuthoritySpoke documentation for more information including instructions for installing AuthoritySpoke from the Python Package Index. Please open an issue in AuthoritySpoke’s GitHub repo if you have any problems, including any problems using the documentation. On Twitter, follow me at @mcareyaus, or follow @authorityspoke for project updates.