llm.job-apply


Generate a structured job application response from a CV and a job offer via a transparent, multi-step LLM pipeline.

Status

This repository is a draft/workbench. Expect breaking changes to prompts and CLI flags.

What it does

llm.apply-job reads:

  • a job offer (--joboffer)
  • a CV (--cv)
  • optional “aside” notes (--aside) for context not present in the CV (missing details, constraints, preferences)

It produces six Markdown artifacts and keeps intermediate files for traceability (default output directory: .llm.apply-job/).

Quick start

src/llm.apply-job -j JOB-OFFER.md -c CV.md

How it works (pipeline)

The script runs six sequential steps and writes each result to a stable filename:

  1. 01.cv_analysis.md: CV analysis + inferred themes (hard/soft skills)
  2. 02.offer_analysis.md: offer analysis + inferred expectations/context
  3. 03.match_base.md: explicit + implicit matches
  4. 04.unmatch_base.md: explicit + implicit gaps + honest mitigations
  5. 05.resp_strategy.md: response strategy and structure
  6. 06.resp_final.md: final application letter
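
The step loop above can be sketched as follows. This is an assumed internal structure, not the actual script: `build_prompt` is a placeholder for the per-step prompt assembly, and `llm` is stubbed out so the sketch runs offline (the real call is `llm --model MODEL --no-stream`).

```shell
# Sketch of the six-step pipeline (assumed structure, not the real script).
# Each step's prompt goes to the `llm` CLI on stdin; the answer lands in a
# stable, numbered Markdown file.
set -eu
outdir=".llm.apply-job"
mkdir -p "$outdir"

# Offline stand-ins for the real prompt assembly and the real `llm` binary.
build_prompt() { printf 'prompt for step %s\n' "$1"; }
llm() { cat; }   # real call: llm --model "$MODEL" --no-stream

for step in 01.cv_analysis 02.offer_analysis 03.match_base \
            04.unmatch_base 05.resp_strategy 06.resp_final; do
    build_prompt "$step" | llm > "$outdir/$step.md"
done
```

Because each step writes to a stable filename, you can inspect or diff any intermediate result between runs.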

Outputs

Default output directory: .llm.apply-job/

01.cv_analysis.md
02.offer_analysis.md
03.match_base.md
04.unmatch_base.md
05.resp_strategy.md
06.resp_final.md

Re-running the script overwrites those step files and prompt files in .llm.apply-job/.tmp/. Other files in the output directory are left untouched.

Usage

src/llm.apply-job -j JOB-OFFER.md -c CV.md [-a ASIDE.md] [-o OUTPUT_DIR] [-l LANG]

Options:

  • -j, --joboffer FILE           job offer in Markdown
  • -c, --cv FILE                 CV in Markdown
  • -a, --aside FILE              extra context applied wherever the CV is used
  • -o, --output DIR              output directory (default: .llm.apply-job)
  • -l, --lang LANG               response language (default: fr)
  • -re, --reasoning-model MODEL  analysis model (see --help for defaults)
  • -wr, --write-model MODEL      writing model (see --help for defaults)

Final-letter style

The final response is guided by a template and strict style rules:

  • direct, concrete, active voice
  • avoid adverbs, doubt, negative-connotation wording, and empty “bullshit” phrasing
  • ends with a scheduling question (assume interest; ask when)

If you want to change the template or rules, edit src/llm.apply-job (step 06.resp_final prompt).

Prerequisites & dependencies

  • Bash
  • An llm CLI in PATH that supports llm --model MODEL --no-stream and reads the prompt from stdin
  • Optional (pretty output): mdformat, batcat
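
A quick preflight check for these tools might look like this (a sketch; `check_deps` is a hypothetical helper, not part of the project):

```shell
# Report which prerequisites are on PATH. Returns 0 when all required
# tools are found, 1 otherwise; mdformat/batcat are merely optional.
check_deps() {
    ok=0
    for tool in bash llm; do
        if ! command -v "$tool" >/dev/null 2>&1; then
            echo "required tool missing: $tool"
            ok=1
        fi
    done
    for tool in mdformat batcat; do
        command -v "$tool" >/dev/null 2>&1 || echo "optional tool missing: $tool"
    done
    return "$ok"
}
check_deps || true
```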

Examples

Language/output directory:

src/llm.apply-job -j JOB-OFFER.md -c CV.md -l en
src/llm.apply-job -j JOB-OFFER.md -c CV.md -o out/

With aside notes:

src/llm.apply-job -j JOB-OFFER.md -c CV.md -a CV-ASIDE.md

Override models:

src/llm.apply-job -j JOB-OFFER.md -c CV.md \
  -re openrouter/google/gemini-3-pro-preview \
  -wr openrouter/google/gemini-3-flash-preview

Installation

make install-local   # ~/.local/bin/llm.apply-job
make install-system  # /usr/local/bin/llm.apply-job (requires sudo)

After install you can run llm.apply-job instead of src/llm.apply-job.

Testing

Tests run offline using a mocked llm binary:

sh spec/run.sh
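
The mock can be imagined along these lines (illustrative only; see spec/ for the real mock): a fake llm executable placed first on PATH that swallows the prompt and returns a canned reply, so no request ever reaches a provider.

```shell
# Illustrative mock: a fake `llm` on PATH that ignores its flags, consumes
# the prompt from stdin, and prints a deterministic canned answer.
mockdir="$(mktemp -d)"
cat > "$mockdir/llm" <<'EOF'
#!/bin/sh
cat >/dev/null          # consume the prompt from stdin
echo "MOCK RESPONSE"    # deterministic canned answer
EOF
chmod +x "$mockdir/llm"
PATH="$mockdir:$PATH"

llm --model any --no-stream </dev/null   # prints: MOCK RESPONSE
```

With the mock first on PATH, the pipeline runs end to end offline and every step file simply contains the canned reply.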

Project layout

src/              # scripts
spec/             # bash tests + fixtures + mocks (offline)
vibes.reference/  # reference scripts

Contributing

See CONTRIBUTING.md.

Privacy & safety

This tool sends your CV/offer content to the configured LLM provider via llm. Review your provider settings and avoid sharing secrets.

Contributors

Thanks to everyone who contributes. If you'd like to be listed here, add your name/handle in alphabetical order in a PR.

  • (add your name/handle)

License

MIT. See LICENSE.