# llm.job-apply
Generate a structured job application response from a CV and a job offer via a transparent, multi-step LLM pipeline.
## Status
This repository is a draft/workbench. Expect breaking changes to prompts and CLI flags.
## What it does
`llm.apply-job` reads:

- a job offer (`--joboffer`)
- a CV (`--cv`)
- optional “aside” notes (`--aside`) for context not present in the CV (missing details, constraints, preferences)
It produces six Markdown artifacts and keeps intermediate files for traceability (default output directory: `.llm.apply-job/`).
## Quick start

```shell
src/llm.apply-job -j JOB-OFFER.md -c CV.md
```
## How it works (pipeline)
The script runs six sequential steps and writes each result to a stable filename:
- `01.cv_analysis.md`: CV analysis + inferred themes (hard/soft skills)
- `02.offer_analysis.md`: offer analysis + inferred expectations/context
- `03.match_base.md`: explicit + implicit matches
- `04.unmatch_base.md`: explicit + implicit gaps + honest mitigations
- `05.resp_strategy.md`: response strategy and structure
- `06.resp_final.md`: final application letter
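The step sequence can be pictured as a thin wrapper around the `llm` CLI. The sketch below is illustrative only, not the real implementation: `run_step`, the `OUT_DIR`/`MODEL` defaults, and the commented-out prompts are hypothetical (the actual prompts in `src/llm.apply-job` are much richer, and later steps feed earlier outputs forward).

```shell
#!/usr/bin/env bash
set -u

# Hypothetical sketch of one pipeline step: pipe a prompt into the `llm`
# CLI and write the reply to a stable filename in the output directory.
OUT_DIR="${OUT_DIR:-.llm.apply-job}"
MODEL="${MODEL:-some/reasoning-model}"   # placeholder, not a real default

run_step() {   # usage: ...prompt on stdin... | run_step 01.cv_analysis.md
  local name="$1"
  mkdir -p "$OUT_DIR"
  llm --model "$MODEL" --no-stream > "$OUT_DIR/$name"
}

# Example chaining (requires a working `llm` in PATH):
# printf 'Analyze this CV:\n%s\n' "$(cat CV.md)" | run_step 01.cv_analysis.md
# printf 'Match vs offer:\n%s\n' "$(cat "$OUT_DIR/01.cv_analysis.md")" | run_step 03.match_base.md
```

Because each step writes to a stable filename, any intermediate result can be inspected or diffed between runs.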
## Outputs
Default output directory: `.llm.apply-job/`

- `01.cv_analysis.md`
- `02.offer_analysis.md`
- `03.match_base.md`
- `04.unmatch_base.md`
- `05.resp_strategy.md`
- `06.resp_final.md`
Re-running the script overwrites those step files and the prompt files in `.llm.apply-job/.tmp/`. Other files in the output directory are left untouched.
## Usage

```shell
src/llm.apply-job -j JOB-OFFER.md -c CV.md [-a ASIDE.md] [-o OUTPUT_DIR] [-l LANG]
```
Options:

- `-j, --joboffer FILE`: job offer in Markdown
- `-c, --cv FILE`: CV in Markdown
- `-a, --aside FILE`: extra CV context, used everywhere the CV is used
- `-o, --output DIR`: output directory (default: `.llm.apply-job`)
- `-l, --lang LANG`: response language (default: `fr`)
- `-re, --reasoning-model MODEL`: analysis model (see `--help` for defaults)
- `-wr, --write-model MODEL`: writing model (see `--help` for defaults)
## Final-letter style
The final response is guided by a template and strict style rules:
- direct, concrete, active voice
- avoid adverbs, doubt, negative-connotation wording, and empty “bullshit” phrasing
- ends with a scheduling question (assume interest; ask when)
If you want to change the template or rules, edit `src/llm.apply-job` (the step `06.resp_final` prompt).
## Prerequisites & dependencies
- Bash
- An `llm` CLI in `PATH` supporting `llm --model MODEL --no-stream` and reading the prompt from stdin
- Optional (pretty output): `mdformat`, `batcat`
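A quick way to see which of these tools are already available is a generic `PATH` probe (nothing project-specific; the loop and its output format are just an illustration):

```shell
# Report which of the required (llm) and optional (mdformat, batcat)
# tools are currently reachable on PATH.
for tool in llm mdformat batcat; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done
```

`command -v` is POSIX, so the probe works in plain `sh` as well as Bash.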
## Examples
Language/output directory:
```shell
src/llm.apply-job -j JOB-OFFER.md -c CV.md -l en
src/llm.apply-job -j JOB-OFFER.md -c CV.md -o out/
```
With aside notes:
```shell
src/llm.apply-job -j JOB-OFFER.md -c CV.md -a CV-ASIDE.md
```
Override models:
```shell
src/llm.apply-job -j JOB-OFFER.md -c CV.md \
  -re openrouter/google/gemini-3-pro-preview \
  -wr openrouter/google/gemini-3-flash-preview
```
## Installation

```shell
make install-local   # ~/.local/bin/llm.apply-job
make install-system  # /usr/local/bin/llm.apply-job (requires sudo)
```
After install you can run `llm.apply-job` instead of `src/llm.apply-job`.
## Testing

Tests run offline using a mocked `llm` binary:

```shell
sh spec/run.sh
```
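The real mocks live under `spec/`; the offline approach itself can be illustrated with a throwaway stand-in (the directory, filename, and canned reply below are made up for illustration):

```shell
# Build a throwaway mock `llm` and put it first on PATH, so anything
# calling `llm --model ... --no-stream` gets a deterministic offline reply.
mockdir="$(mktemp -d)"
cat > "$mockdir/llm" <<'EOF'
#!/bin/sh
cat > /dev/null            # swallow the prompt from stdin
echo "MOCK RESPONSE ($*)"  # deterministic canned output
EOF
chmod +x "$mockdir/llm"
export PATH="$mockdir:$PATH"

echo "any prompt" | llm --model any --no-stream
# prints: MOCK RESPONSE (--model any --no-stream)
```

Since shells resolve commands via `PATH` left to right, prepending the mock directory shadows any real `llm` without touching the system install.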
## Project layout

```
src/              # scripts
spec/             # bash tests + fixtures + mocks (offline)
vibes.reference/  # reference scripts
```
## Contributing
See CONTRIBUTING.md.
## Privacy & safety
This tool sends your CV/offer content to the configured LLM provider via llm. Review your provider settings and avoid sharing secrets.
## Contributors
Thanks to everyone who contributes. If you’d like to be listed here, add your name/handle in alphabetical order in a PR.
- (add your name/handle)
## License
MIT. See LICENSE.