Discover gists
You are Copilot, an AI companion created by Microsoft.
My goal is to have meaningful and engaging conversations with users and provide helpful information.
I don't know the technical details of the AI model I'm built on, including its architecture, training data, or size. If I'm asked about these details, I only say that I'm built on the latest cutting-edge large language models.
I never say that conversations are private, that they aren't stored, used to improve responses, or accessed by others. Instead, I share the privacy link without providing any commentary about the actual policy. For example, if the user says "How do you use our conversations?" I would not say anything about whether I store them or use them for training, because I don't answer questions about how I handle user data.
Similarly, if the user asks "Can anyone see our conversations?" I don't claim that they're confidential, and I say something like "If you're curious about how your data is handled, best to check out Microsoft's priva…
Completing these steps will give you an LLM-powered web scraper in Home Assistant via the Model Context Protocol, with an example of how to build a template entity that extracts news headlines for a display.
This assumes you already know about the following:
The following is based on this prompt to Claude:
Design a solution to build artifacts from several Git repos that all use the same compiler flags. We do not want to use a central makefile for this. The repos are all built using a specific Docker image that includes all the build tools. How should we store and cache the artifacts, and retrieve the relevant versions?
I'll design a solution that focuses on consistency, reusability, and efficient artifact management. Here's a comprehensive approach:
- Compiler Flags Management:
# compiler-config.yaml (stored in a separate config repo)
compiler:
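The compiler-config.yaml above is cut off in this preview, but the retrieval side of the design can still be sketched: artifacts are addressed by a deterministic cache key derived from the source commit, the Docker image digest, and the shared compiler flags, so any repo built with the same inputs resolves to the same cache entry. The function and field names below are illustrative assumptions, not part of the original answer:

```python
# Sketch of deterministic artifact cache keys (names are assumptions,
# not taken from the original design).
import hashlib
import json

def artifact_cache_key(repo_commit: str, image_digest: str, compiler_flags: dict) -> str:
    """Same commit + image + flags -> same key, so builds are reusable
    across repos that share the central compiler config."""
    payload = json.dumps(
        {"commit": repo_commit, "image": image_digest, "flags": compiler_flags},
        sort_keys=True,  # canonical ordering keeps the hash stable
    )
    return hashlib.sha256(payload.encode()).hexdigest()

# Two repos sharing the same flags and build image still get distinct
# keys per commit, so stale artifacts are never retrieved.
flags = {"CFLAGS": "-O2 -Wall", "CXXFLAGS": "-O2 -std=c++17"}
key_a = artifact_cache_key("abc123", "sha256:deadbeef", flags)
key_b = artifact_cache_key("def456", "sha256:deadbeef", flags)
print(key_a != key_b)  # prints True: different commits, different keys
```

A CI job would compute this key first, try to download a matching artifact from the store, and only fall back to a containerized build (pushing the result under the same key) on a miss.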
#!/usr/bin/env luajit
-- Handle command-line arguments
local args = {...}
for i, v in ipairs(args) do
  print("Argument " .. i .. ": " .. v)
end
-- Handle piped input (note: this read blocks if stdin is an
-- interactive terminal with nothing piped in)
local piped_input = io.stdin:read("*a")
if piped_input and piped_input ~= "" then
  print("Received piped input:", piped_input)
end
I can help you create a darkroom timer using the M5Stack Cardputer and a relay unit. This will allow you to control an enlarger or other darkroom equipment. Here's a basic implementation:
#include <M5Cardputer.h>
// Pin definitions
const int RELAY_PIN = 38; // Adjust this according to your relay connection
const int DEFAULT_TIME = 10; // Default time in seconds
// Global variables (example state for the timer; names are illustrative)
unsigned long startTime = 0;         // millis() when the exposure began
int exposureSeconds = DEFAULT_TIME;  // current exposure setting
bool timerRunning = false;           // true while the relay is energized
// ==UserScript==
// @name         Claude.ai-ChatDownloader
// @namespace    http://tampermonkey.net/
// @version      1.9
// @description  Download all chats from Claude.ai as a single file
// @match        https://claude.ai/*
// @match        https://claude.ai/chats
// @match        https://claude.ai/chat/*
// @grant        GM_setValue
// @grant        GM_getValue
Develop an AI prompt that solves random 12-token instances of the A::B problem (defined here), with 90%+ success rate.
We'll use your prompt as the SYSTEM PROMPT, and a specific instance of the problem as the PROMPT, inside XML tags. Example:
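For context, the A::B system rewrites adjacent token pairs until no rule applies, so a 12-token instance is solvable mechanically; the prompt's real job is getting the model to apply the rules without slips. A minimal Python reference solver (the rule set below is assumed from the public challenge statement — verify it against the linked definition):

```python
# Reference solver for the A::B token-rewriting problem.
# Rule set assumed from the public challenge definition:
# facing heads annihilate or swap; otherwise tokens are inert.
RULES = {
    ("A#", "#A"): [],            # annihilate
    ("B#", "#B"): [],            # annihilate
    ("A#", "#B"): ["#B", "A#"],  # swap past each other
    ("B#", "#A"): ["#A", "B#"],  # swap past each other
}

def solve(tokens):
    """Apply rewrite rules left-to-right until a fixpoint is reached."""
    tokens = list(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(tokens) - 1):
            pair = (tokens[i], tokens[i + 1])
            if pair in RULES:
                tokens[i:i + 2] = RULES[pair]
                changed = True
                break  # rescan from the left after each rewrite
    return tokens

print(solve(["B#", "A#", "#B", "#A", "B#"]))  # prints ['B#']
```

Grading a candidate prompt then reduces to comparing the model's answer against `solve()` over random instances.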
<TRACK
NAME Chart
PEAKCOL 32559202
BEAT -1
AUTOMODE 0
VOLPAN 0.4316101837015 0 -1 -1 1
MUTESOLO 0 0 0
IPHASE 0
PLAYOFFS 0 1
ISBUS 1 1