Add more books

yhirose
2026-03-09 19:30:18 -04:00
parent e41ec36274
commit 1f34c541b0
9 changed files with 238 additions and 8 deletions

View File

@@ -1,8 +1,95 @@
---
title: "Cookbook"
-order: 1
+order: 0
---
-This section is under construction.
+A collection of recipes that answer "How do I...?" questions. Each recipe is self-contained — read only what you need.
-Check back soon for a collection of recipes organized by topic.
## Client
### Basics
- Get the response body as a string / save to a file
- Send and receive JSON
- Set default headers (`set_default_headers`)
- Follow redirects (`set_follow_location`)
### Authentication
- Use Basic authentication (`set_basic_auth`)
- Call an API with a Bearer token
### File Upload
- Upload a file as multipart form data (`make_file_provider`)
- POST a file as raw binary (`make_file_body`)
- Send the body with chunked transfer (Content Provider)
### Streaming & Progress
- Receive a response as a stream
- Use the progress callback (`set_progress`)
### Connection & Performance
- Set timeouts (`set_connection_timeout` / `set_read_timeout`)
- Set an overall timeout (`set_max_timeout`)
- Understand connection reuse and Keep-Alive behavior
- Enable compression (`set_compress` / `set_decompress`)
- Send requests through a proxy (`set_proxy`)
### Error Handling & Debugging
- Handle error codes (`Result::error()`)
- Handle SSL errors (`ssl_error()` / `ssl_backend_error()`)
- Set up client logging (`set_logger` / `set_error_logger`)
## Server
### Basics
- Register GET / POST / PUT / DELETE handlers
- Receive JSON requests and return JSON responses
- Use path parameters (`/users/:id`)
- Set up a static file server (`set_mount_point`)
### Streaming & Files
- Stream a large file in the response (`ContentProvider`)
- Return a file download response (`Content-Disposition`)
- Receive multipart data as a stream (`ContentReader`)
- Compress responses (gzip)
### Handler Chain
- Add pre-processing to all routes (Pre-routing handler)
- Add response headers with a Post-routing handler (CORS, etc.)
- Authenticate per route with a Pre-request handler (`matched_route`)
- Pass data between handlers with `res.user_data`
### Error Handling & Debugging
- Return custom error pages (`set_error_handler`)
- Catch exceptions (`set_exception_handler`)
- Log requests (Logger)
- Detect client disconnection (`req.is_connection_closed()`)
### Operations & Tuning
- Assign a port dynamically (`bind_to_any_port`)
- Control startup order with `listen_after_bind`
- Shut down gracefully (`stop()` and signal handling)
- Tune Keep-Alive (`set_keep_alive_max_count` / `set_keep_alive_timeout`)
- Configure the thread pool (`new_task_queue`)
## TLS / Security
- Choose between OpenSSL, mbedTLS, and wolfSSL (build-time `#define` differences)
- Control SSL certificate verification (disable, custom CA, custom callback)
- Use a custom certificate verification callback (`set_server_certificate_verifier`)
- Set up an SSL/TLS server (certificate and private key)
- Configure mTLS (mutual TLS with client certificates)
- Access the peer certificate on the server (`req.peer_cert()` / SNI)
## SSE
- Implement an SSE server
- Use event names to distinguish event types
- Handle reconnection (`Last-Event-ID`)
- Receive SSE events on the client
## WebSocket
- Implement a WebSocket echo server and client
- Configure heartbeats (`set_websocket_ping_interval`)
- Handle connection close
- Send and receive binary frames

View File

@@ -19,3 +19,4 @@ Under the hood, it uses blocking I/O with a thread pool. It's not built for hand
- [A Tour of cpp-httplib](tour/) — A step-by-step tutorial covering the basics. Start here if you're new
- [Cookbook](cookbook/) — A collection of recipes organized by topic. Jump to whatever you need
- [Building a Desktop LLM App](llm-app/) — A hands-on guide to building a desktop app with llama.cpp, step by step

View File

@@ -0,0 +1,22 @@
---
title: "Building a Desktop LLM App with cpp-httplib"
order: 0
---
Build an LLM-powered translation desktop app step by step, learning both the server and client sides of cpp-httplib along the way. Translation is just an example — swap it out to build your own summarizer, code generator, chatbot, or any other LLM application.
## Dependencies
- [llama.cpp](https://github.com/ggml-org/llama.cpp) — LLM inference engine
- [nlohmann/json](https://github.com/nlohmann/json) — JSON parser (header-only)
- [webview/webview](https://github.com/webview/webview) — WebView wrapper (header-only)
- [cpp-httplib](https://github.com/yhirose/cpp-httplib) — HTTP server/client (header-only)
## Chapters
1. **Embed llama.cpp and create a REST API** — Start with a simple API that accepts text via POST and returns a translation as JSON
2. **Add token streaming with SSE** — Stream translation results token by token using the standard LLM API approach
3. **Add model discovery and download** — Use the client to search and download GGUF models from Hugging Face
4. **Add a Web UI** — Serve a translation UI with static file hosting, making the app accessible from a browser
5. **Turn it into a desktop app with WebView** — Wrap the web app with webview/webview to create an Electron-like desktop application
6. **Code reading: llama.cpp's server implementation** — Compare your implementation with production-quality code and learn from the differences

View File

@@ -1,6 +1,6 @@
---
title: "A Tour of cpp-httplib"
-order: 1
+order: 0
---
This is a step-by-step tutorial that walks you through the basics of cpp-httplib. Each chapter builds on the previous one, so please read them in order starting from Chapter 1.