- Playground (aka Explorer) for learning and experimenting with GitHub GraphQL: https://docs.github.com/en/graphql/overview/explorer
- Query: https://docs.github.com/en/graphql/reference/queries#repository
- Objects are identified by arguments and expose fields: https://docs.github.com/en/graphql/reference/objects#repository
- How to call the GraphQL endpoint (see the curl sketch after this list): https://docs.github.com/en/graphql/guides/forming-calls-with-graphql
- Octokit GraphQL client: use this from JavaScript instead of your own fetch()-based code, as it follows GitHub best practices
- GitHub doesn't allow unauthenticated GraphQL requests. However, the authenticated rate limit is generous at 5,000 requests/hour. Hearsay: even raw.githubusercontent.com, though apparently unlimited, throttles after 5k requests/hour.
- Question: are GraphQL queries served from the edge, given that they are not mutative (i.e. read-only)?
- ... more at https://docs.github.com/en/graphql
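As a quick reference for the "forming calls" guide above, here is a minimal sketch of calling the endpoint with curl. It assumes a personal access token exported as `TOKEN`; the owner/name values are placeholders.

```sh
# POST a GraphQL query to GitHub's single GraphQL endpoint.
# The request body is JSON with a "query" field containing the GraphQL document.
curl -sS https://api.github.com/graphql \
  -H "Authorization: bearer $TOKEN" \
  -X POST \
  -d '{"query":"query { repository(owner: \"octocat\", name: \"Hello-World\") { description stargazerCount } }"}'
```

The same query can be pasted into the Explorer playground first to check field names before scripting it.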
# Copyright: (c) 2024, Jordan Borean (@jborean93) <[email protected]>
# MIT License (see LICENSE or https://opensource.org/licenses/MIT)
Function New-ScheduledTaskSession {
    <#
    .SYNOPSIS
    Creates a PSSession for a process running as a scheduled task.
    .DESCRIPTION
    Creates a PSSession that can be used to run code inside a scheduled task
Sometimes it is useful to route traffic through a different machine for testing or development. At work, we have a VPN to a remote facility that we haven't bothered to fix for routing, so the only way to reach a certain machine over that VPN is via an SSH tunnel to a machine that is reachable over the VPN. Other times, I have used this technique to test internet-facing requests against sites I am developing. It is pretty easy, and if you don't use Firefox regularly, you can treat Firefox as your "proxy" browser while your other browsers keep their normal configuration. (You can also configure the entire system to use the proxy; other articles cover that approach.)
- Open a terminal
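From there, a minimal sketch of the dynamic port forward this relies on, assuming a reachable jump host `user@jumphost` and local SOCKS port 1080 (both are placeholders):

```sh
# Open a dynamic (SOCKS) forward on local port 1080 through the jump host.
# -D 1080 : listen locally on port 1080 and proxy traffic over the SSH connection
# -N      : do not run a remote command, just forward
# -C      : compress traffic (optional)
ssh -D 1080 -N -C user@jumphost
```

In Firefox, point the connection settings at a manual SOCKS v5 proxy on 127.0.0.1:1080, and enable "Proxy DNS when using SOCKS v5" if you want name resolution to happen on the far side of the tunnel.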
I want you to refine this brainstorming document into a prompt for a deep research system that will be tasked with writing a technical spike research document on a software engineering project. The goal of this research is to help future agentic coding systems gain a good understanding of the technical landscape around the software the user wants to create.
<context>
Deep research is a category of product in which large language models capable of test-time compute are paired with the ability to:
- search the web
- browse documentation
- read research papers
- further refine their research based on their findings
Before continuing: This guide is currently outdated, but I'm working on a new one that includes upgrade steps. I'll link it here once it's finished :)
This guide will show you how to set up Plex Media Server with Sonarr, Radarr, Jackett, Overseerr and qBittorrent using Docker. It is written for Ubuntu 20.04 but should work on other Linux distributions as well (as long as the distribution is supported by Docker). It also assumes some experience with Linux and Docker: if you are new to Docker, I recommend reading the Docker documentation, and if you are new to Linux, I recommend reading the Ubuntu documentation.
Now, let's get started!
Please note: This guide was written without hardlinking in mind for Sonarr/Radarr. If you want to use hardlinking, refer to #Hardlinking
function KillChildren
{
    Param(
        [Parameter(Mandatory=$True, Position=1)]
        [int]$parentProcessId
    )
    # Recurse first so that grandchildren are stopped before their own parents.
    Get-WmiObject win32_process | Where-Object { $_.ParentProcessId -eq $parentProcessId } | ForEach-Object { KillChildren $_.ProcessId }
    # Then stop the direct children, ignoring processes that have already exited.
    Get-WmiObject win32_process | Where-Object { $_.ParentProcessId -eq $parentProcessId } | ForEach-Object { Stop-Process -Id $_.ProcessId -ErrorAction SilentlyContinue }
}
A sample plan created for students in the Inżynieria Bezpieczeństwa (Safety Engineering) program
- Characteristics of spatial data
- what latitude and longitude actually are
- vector vs raster
- basic operations on spatial data:
A "reflector" is a utility that helps users to find the fastest and most up-to-date mirrors for their system. Mirrors are servers that store copies of the Arch Linux packages and updates, which users can download and install on their own machines.
sudo pacman -S reflector
Default
reflector
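Running reflector with no options just prints a ranked mirror list to stdout. A common invocation writes the result where pacman expects it; the mirror count and protocol filter below are typical choices, not requirements:

```sh
# Fetch the current mirror status, keep the 20 most recently synced HTTPS mirrors,
# sort them by download rate, and save the result as the active pacman mirrorlist.
sudo reflector --latest 20 --protocol https --sort rate --save /etc/pacman.d/mirrorlist
```

It is worth backing up /etc/pacman.d/mirrorlist before overwriting it with --save.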