jinni

smat-dev/jinni


jinni is hosted online, so all tools can be tested directly either in the Inspector tab or in the Online Client.


Jinni is a tool designed to efficiently provide Large Language Models (LLMs) with the context of your projects by consolidating relevant project files.


Tools

Functions exposed to the LLM to take actions

usage

Retrieves the Jinni usage documentation (content of README.md).

read_context

Reads context from a specified project root directory (absolute path). Focuses on the specified target files/directories within that root. Returns a static view of files with paths relative to the project root. Assume the user wants to read in context for the whole project unless otherwise specified - do not ask the user for clarification if just asked to read context. If the user just says 'jinni', interpret that as read_context. If the user asks to list context, use the list_only argument. Both targets and rules accept a JSON array of strings. The project_root, targets, and rules arguments are mandatory. You can ignore the other arguments by default. IMPORTANT NOTE ON RULES: Ensure you understand the rule syntax (details available via the usage tool) before providing specific rules. Using rules=[] is recommended if unsure, as this uses sensible defaults.
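A minimal sketch of what a direct read_context call could look like through the official MCP Python SDK is shown below. The server launch command (uvx jinni-server), the project path, and the assumption that empty targets means "the whole project root" are not confirmed here; check the usage tool or the repository README for the exact invocation.

```python
# Hedged sketch: call Jinni's read_context tool via the MCP Python SDK.
# Assumptions: the server is launched with "uvx jinni-server" (verify against
# the README via the usage tool), /abs/path/to/project is your project root,
# and targets=[] is taken to mean the whole root.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(command="uvx", args=["jinni-server"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Optional: pull the bundled README for rule syntax details
            # (assumes the usage tool takes no arguments).
            docs = await session.call_tool("usage", arguments={})

            # Whole-project context: empty rules fall back to the defaults.
            result = await session.call_tool(
                "read_context",
                arguments={
                    "project_root": "/abs/path/to/project",  # must be absolute
                    "targets": [],  # assumed: everything under project_root
                    "rules": [],    # default exclusions + project .contextfiles
                },
            )
            print(result.content)

asyncio.run(main())
```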

Guidance for AI Model Usage

When requesting context using this tool:

  • Default Behavior: If you provide an empty rules list ([]), Jinni uses sensible default exclusions (like .git, node_modules, __pycache__, common binary types) combined with any project-specific .contextfiles. This usually provides the "canonical context" - files developers typically track in version control. Assume this is what the user wants if they just ask to read context.
  • Targeting Specific Files: If you have a list of specific files you need (e.g., ["src/main.py", "README.md"]), provide them in the targets list. This is efficient and precise, and quicker than reading files one by one (see the sketch after this list).
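
As a sketch of the two patterns above, the argument payloads might look like the following; the paths are placeholders, and the rule syntax itself is documented via the usage tool.

```python
# Hedged sketch of the two common argument shapes for read_context
# (placeholder absolute path; rules=[] keeps the default exclusions).

# 1. Canonical context for the whole project.
whole_project_args = {
    "project_root": "/abs/path/to/project",
    "targets": [],
    "rules": [],
}

# 2. Targeted read: name the exact files you need in a single call instead of
#    reading them one by one. An optional list_only argument also exists if
#    you only need the file listing rather than the contents.
targeted_args = {
    "project_root": "/abs/path/to/project",
    "targets": ["src/main.py", "README.md"],
    "rules": [],
}
```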

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources