
skillnet-ai

The official Python SDK & CLI for SkillNet — search, install, create, evaluate, and connect AI agent skills.

MIT License · Python 3.9+ · Available on PyPI

Website · GitHub · PyPI


Quick Start

pip install skillnet-ai

from skillnet_ai import SkillNetClient

client = SkillNetClient()  # No API key needed for search & download

# Find a skill
results = client.search(q="pdf", limit=5)
print(results[0].skill_name, results[0].stars)

# Install it
client.download(url=results[0].skill_url, target_dir="./my_skills")

That's it. Search and download are free — no API key, no rate limit.

For create, evaluate, and analyze, set API_KEY (any OpenAI-compatible key). See Configuration.


Features

Feature What it does
🔍 Search Keyword match or AI semantic search across 500+ community skills
📦 Install One-line download from any GitHub skill directory
🛠️ Create Auto-convert repos, PDFs, conversation logs, or text prompts → structured skill packages
📊 Evaluate Score skills on 5 dimensions: Safety · Completeness · Executability · Maintainability · Cost-Awareness
🕸️ Analyze Map similar_to · belong_to · compose_with · depend_on relationships between skills

Python SDK

Initialize

from skillnet_ai import SkillNetClient

client = SkillNetClient(
    api_key="sk-...",         # Required for create / evaluate / analyze
    # base_url="...",         # Optional: custom LLM endpoint (default: OpenAI)
    # github_token="ghp-..."  # Optional: for private repos or higher rate limits
)

Credentials can also be set via environment variables: API_KEY, BASE_URL, GITHUB_TOKEN.
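A minimal sketch of how argument-over-environment precedence could be resolved. Note that `resolve_config` is not part of the SDK; it only illustrates the assumption that an explicit argument wins over the corresponding environment variable, with `BASE_URL` falling back to the documented OpenAI default:

```python
import os

def resolve_config(api_key=None, base_url=None):
    """Resolve credentials: explicit argument first, then environment
    variable, then the documented default."""
    return {
        "api_key": api_key or os.environ.get("API_KEY"),
        "base_url": base_url or os.environ.get("BASE_URL", "https://api.openai.com/v1"),
    }

# Simulate env-based configuration
os.environ["API_KEY"] = "sk-demo"
print(resolve_config())
```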

Search

# Keyword search
results = client.search(q="pdf", limit=10, min_stars=5, sort_by="stars")

# Semantic search — find skills by meaning, not just keywords
results = client.search(q="analyze financial PDF reports", mode="vector", threshold=0.85)

if results:
    print(f"{results[0].skill_name} ({results[0].stars} stars)")
    print(results[0].skill_url)

Search Parameters
Parameter Type Default Description
q str required Search query (keywords or natural language)
mode str "keyword" "keyword" or "vector"
category str None Filter by category
limit int 20 Max results per request
page int 1 Page number (keyword only)
min_stars int 0 Minimum star count (keyword only)
sort_by str "stars" "stars" or "recent" (keyword only)
threshold float 0.8 Similarity threshold 0.0–1.0 (vector only)
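Because keyword search is paginated via `limit` and `page`, you can walk all pages with a small generator. This helper is not part of the SDK, and the stop condition (a short page means the last page) is an assumption; `fake_search` below is a stand-in for `client.search`:

```python
def iter_all_results(search_fn, q, page_size=20, **kwargs):
    """Yield results across pages, incrementing `page` until a page
    comes back shorter than `page_size` (keyword mode only)."""
    page = 1
    while True:
        batch = search_fn(q=q, limit=page_size, page=page, **kwargs)
        yield from batch
        if len(batch) < page_size:
            return
        page += 1

# Stand-in for client.search: 45 fake hits served in pages
def fake_search(q, limit, page, **kwargs):
    total = 45
    start = (page - 1) * limit
    return [f"hit-{i}" for i in range(start, min(start + limit, total))]

results = list(iter_all_results(fake_search, "pdf", page_size=20))
print(len(results))  # 45
```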

Install

local_path = client.download(
    url="https://github.com/anthropics/skills/tree/main/skills/skill-creator",
    target_dir="./my_skills"
)
print(f"Installed at: {local_path}")

Create

Convert diverse sources into structured skill packages:

# From conversation logs / execution traces
client.create(trajectory_content="User: rename .jpg→.png\nAgent: Done.", output_dir="./skills")

# From a GitHub repository
client.create(github_url="https://github.com/zjunlp/DeepKE", output_dir="./skills")

# From office documents (PDF / PPT / Word)
client.create(office_file="./guide.pdf", output_dir="./skills")

# From a natural language description
client.create(prompt="A skill for web scraping article titles", output_dir="./skills")

All modes auto-generate a complete skill package: SKILL.md + optional scripts/, references/, assets/.

Evaluate

Score any skill on 5 quality dimensions. Accepts local paths or GitHub URLs:

result = client.evaluate(
    target="https://github.com/anthropics/skills/tree/main/skills/algorithmic-art"
)
# {
#   "safety":          {"level": "Good", "reason": "..."},
#   "completeness":    {"level": "Good", "reason": "..."},
#   "executability":   {"level": "Average", "reason": "..."},
#   "maintainability": {"level": "Good", "reason": "..."},
#   "cost_awareness":  {"level": "Good", "reason": "..."}
# }
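Since every dimension carries a `level` and a `reason`, a report can be filtered down to the dimensions that need attention. The `report` dict below is illustrative sample data mirroring the documented shape, not a real evaluation:

```python
# Illustrative report in the documented shape; values are made up.
report = {
    "safety":          {"level": "Good",    "reason": "No shell injection vectors."},
    "completeness":    {"level": "Good",    "reason": "All sections present."},
    "executability":   {"level": "Average", "reason": "Missing pinned dependencies."},
    "maintainability": {"level": "Good",    "reason": "Clear structure."},
    "cost_awareness":  {"level": "Good",    "reason": "Bounded token usage."},
}

# Keep only dimensions that scored below "Good"
flagged = {dim: r for dim, r in report.items() if r["level"] != "Good"}
for dim, r in flagged.items():
    print(f"{dim}: {r['level']} ({r['reason']})")
```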

Analyze Relationships

Discover connections between skills in a local directory:

relationships = client.analyze(skills_dir="./my_skills")

for rel in relationships:
    print(f"{rel['source']} --[{rel['type']}]--> {rel['target']}")
# PDF_Parser --[compose_with]--> Text_Summarizer
# Web_Scraper --[similar_to]--> Data_Extractor

Detects four relationship types: similar_to · belong_to · compose_with · depend_on. Results are saved to relationships.json by default.
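Each relationship is a dict with `source`, `type`, and `target` keys, so grouping the edges by type is straightforward. The sample list below is illustrative data in that documented shape:

```python
from collections import defaultdict

# Sample edges in the documented source/type/target shape
relationships = [
    {"source": "PDF_Parser",  "type": "compose_with", "target": "Text_Summarizer"},
    {"source": "Web_Scraper", "type": "similar_to",   "target": "Data_Extractor"},
    {"source": "PDF_Parser",  "type": "depend_on",    "target": "OCR_Engine"},
]

# Group edges by relationship type
by_type = defaultdict(list)
for rel in relationships:
    by_type[rel["type"]].append((rel["source"], rel["target"]))

for rel_type, edges in sorted(by_type.items()):
    print(rel_type, edges)
```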


CLI

The CLI ships automatically with pip install skillnet-ai and is built on Typer and Rich for formatted terminal output.

skillnet <command> --help    # Full options for any command

Commands at a Glance

Command What it does Example
search Find skills skillnet search "pdf" --mode vector
download Install a skill skillnet download <url> -d ./skills
create Create from any source skillnet create log.txt -d ./skills
evaluate Quality report skillnet evaluate ./my_skill
analyze Relationship graph skillnet analyze ./my_skills

Search

skillnet search "pdf"
skillnet search "analyze financial reports" --mode vector --threshold 0.85
skillnet search "visualization" --category "Development" --sort-by stars --limit 10

Install

skillnet download https://github.com/anthropics/skills/tree/main/skills/algorithmic-art
skillnet download <url> -d ./my_agent/skills
skillnet download <private_url> --token <your_github_token>

# Use a mirror for faster downloads in restricted networks
skillnet download <url> --mirror https://ghfast.top/

Create

skillnet create ./logs/trajectory.txt -d ./skills          # from trajectory
skillnet create --github https://github.com/owner/repo      # from GitHub repo
skillnet create --office ./docs/guide.pdf                    # from PDF/PPT/Word
skillnet create --prompt "A skill for table extraction"      # from prompt
skillnet create --office report.pdf --model gpt-4o           # custom model

Evaluate

skillnet evaluate ./my_skills/web_search
skillnet evaluate https://github.com/anthropics/skills/tree/main/skills/algorithmic-art
skillnet evaluate ./my_skill --category "Development" --model gpt-4o

Analyze

skillnet analyze ./my_skills
skillnet analyze ./my_skills --no-save     # print only, don't write file
skillnet analyze ./my_skills --model gpt-4o

⚙️ Configuration

Environment Variables

Variable Required For Default
API_KEY create · evaluate · analyze
BASE_URL Custom LLM endpoint https://api.openai.com/v1
GITHUB_TOKEN Private repos / higher rate limits
SKILLNET_MODEL Default LLM model for all commands gpt-4o
GITHUB_MIRROR Faster downloads in restricted networks

search and download (public repos) require no credentials at all.

Recommended mirror: https://ghfast.top/ — set GITHUB_MIRROR or pass --mirror to speed up downloads in restricted networks.
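Proxy mirrors like ghfast.top typically work by prefixing the original GitHub URL with the mirror base. Whether the SDK rewrites URLs exactly this way is an assumption; the helper below only sketches the scheme:

```python
def apply_mirror(url: str, mirror: str) -> str:
    """Prefix a GitHub URL with a mirror base, the scheme proxy
    mirrors such as ghfast.top expect."""
    return mirror.rstrip("/") + "/" + url

print(apply_mirror("https://github.com/owner/repo", "https://ghfast.top/"))
# https://ghfast.top/https://github.com/owner/repo
```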

Linux / macOS:

export API_KEY="sk-..."
export BASE_URL="https://..."   # optional

Windows PowerShell:

$env:API_KEY = "sk-..."
$env:BASE_URL = "https://..."   # optional

Or pass credentials directly in code:

client = SkillNetClient(api_key="sk-...", base_url="https://...")

📂 Skill Structure

Every created or downloaded skill follows a standardized layout:

skill-name/
├── SKILL.md          # [Required] YAML metadata + markdown instructions
├── scripts/          # [Optional] Executable Python / Bash scripts
├── references/       # [Optional] Static docs, API specs, schemas
└── assets/           # [Optional] Templates, icons, examples
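The layout above can be checked mechanically: SKILL.md is required, the three subdirectories are optional. The `validate_skill` helper is not part of the SDK, just a sketch of that rule using the standard library:

```python
from pathlib import Path
import tempfile

def validate_skill(skill_dir):
    """Check the documented layout: SKILL.md is required;
    scripts/, references/, and assets/ are optional directories."""
    skill_dir = Path(skill_dir)
    errors = []
    if not (skill_dir / "SKILL.md").is_file():
        errors.append("missing SKILL.md")
    for sub in ("scripts", "references", "assets"):
        candidate = skill_dir / sub
        if candidate.exists() and not candidate.is_dir():
            errors.append(f"{sub} exists but is not a directory")
    return errors

# Build a minimal valid skill in a temp directory and validate it
with tempfile.TemporaryDirectory() as tmp:
    skill = Path(tmp) / "demo-skill"
    (skill / "scripts").mkdir(parents=True)
    (skill / "SKILL.md").write_text("---\nname: demo-skill\n---\n")
    print(validate_skill(skill))  # []
```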

🤝 Contributing

Contributions are welcome! Feel free to open an Issue or submit a Pull Request.

📄 License

MIT