
A new package that analyzes user-submitted text queries about community moderation or platform policies (like why small voting projects get flagged) and returns structured insights. It uses an LLM to generate reasoned explanations and extracts key points with pattern matching.



ModerateFocus


Analyzing Community Moderation and Platform Policies

Overview

ModerateFocus is a Python package that helps analyze user-submitted text queries about community moderation or platform policies. It uses a Large Language Model (LLM) to generate reasoned explanations and extracts key points from them using pattern matching. This ensures consistent, non-opinionated output, helping users understand common moderation pitfalls without delving into sensitive or subjective areas.
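To illustrate the two-step flow described above (LLM reasoning followed by pattern matching), here is a rough sketch of what the extraction step might look like. The `KEY_POINT_PATTERN` regex and `extract_key_points` helper are hypothetical illustrations, not the package's actual internals:

```python
import re

# Hypothetical assumption: the LLM is prompted to emit key points as "- " bullet lines.
KEY_POINT_PATTERN = re.compile(r"^\s*[-*]\s+(.+)$", re.MULTILINE)

def extract_key_points(llm_output: str) -> list[str]:
    """Pull bulleted key points out of raw LLM output via pattern matching."""
    return [m.strip() for m in KEY_POINT_PATTERN.findall(llm_output)]

sample = """Reasoning: small voting projects are often flagged for low activity.
- Low participation can trigger automated spam heuristics
- New accounts voting in bursts resemble brigading
"""
print(extract_key_points(sample))
```

Extracting via a fixed pattern rather than returning free-form LLM prose is what makes the output structured and consistent across queries.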

Installation

pip install moderatefocus

Usage

from moderatefocus import moderatefocus

user_input = "Why do small voting projects get flagged by moderators?"
response = moderatefocus(user_input, api_key="your_api_key_here")
print(response)  # a list of extracted key points

Parameters

  • user_input: str - the user input text to process.
  • api_key: Optional[str] - the API key for LLM7; if not provided, the default ChatLLM7 is used.
  • llm: Optional[BaseChatModel] - the langchain LLM instance to use; if not provided, the default ChatLLM7 is used.

Using Custom LLM Instances

You can pass your own langchain-based LLM instance if you want to use a different provider. For example:

from langchain_openai import ChatOpenAI
from moderatefocus import moderatefocus

llm = ChatOpenAI()
response = moderatefocus(user_input, llm=llm)

Using Another LLM

You can use another provider such as Anthropic or Google. For example:

from langchain_anthropic import ChatAnthropic
from moderatefocus import moderatefocus

llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")  # langchain_anthropic requires a model name
response = moderatefocus(user_input, llm=llm)

or google:

from langchain_google_genai import ChatGoogleGenerativeAI
from moderatefocus import moderatefocus

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # langchain_google_genai requires a model name
response = moderatefocus(user_input, llm=llm)

API Key Rate Limits

The default rate limits of the LLM7 free tier are sufficient for most uses of this package. If you need higher rate limits, you can supply your own API key, either through the environment variable LLM7_API_KEY or by passing it directly, e.g. moderatefocus(user_input, api_key="your_api_key_here"). You can get a free API key by registering at https://token.llm7.io/.
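The two ways of supplying the key can be sketched as follows; the key value is a placeholder, and the call itself is commented out because it needs a live key:

```python
import os

# Option 1: set the key via environment variable before calling moderatefocus.
os.environ["LLM7_API_KEY"] = "your_llm7_key"  # placeholder, not a real key

# Option 2 (equivalent): pass the key directly to the call.
# response = moderatefocus(user_input, api_key="your_llm7_key")

print(os.environ["LLM7_API_KEY"])
```

Using the environment variable keeps the key out of source code, which is generally preferable when the code is committed to a repository.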

Author Eugene Evstafev ([email protected])

GitHub https://github.com/chigwell

License MIT License
