Yasuhito Nagatomo ynagatomo
@ynagatomo
ynagatomo / TipKitContentView.swift
Created May 24, 2024 09:01
Sample code that uses TipKit in visionOS.
//
// TipKitTestApp.swift
// TipKitTest
//
// Created by Yasuhito Nagatomo on 2024/05/24.
//
// Sample code that uses TipKit in visionOS.
//
// References:
// - Article: Swift with Majid, Discovering app features with TipKit. Basics., 07 May 2024
@ynagatomo
ynagatomo / sample-vm-problem.swift
Created May 20, 2024 00:38
This sample shows that Observable object instances declared with `@Observable` and held in `@State` are created many times whenever the view that owns the property is re-evaluated, so avoid heavy initialization in Observable classes.
//
// ObservableTwiceInitProblemTestApp.swift
// ObservableTwiceInitProblemTest
//
// Created by Yasuhito Nagatomo on 2024/05/20.
//
// Modified christianselig/sample-view-model-init.swift
// https://gist.github.com/christianselig/d88b1a4d1989b973689ae62d4691162f
//
// This sample shows Observable object instances with `@Observable and @State`
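A minimal sketch of the issue being demonstrated (type names are illustrative): the initial value expression of an @State property is evaluated every time the enclosing view struct is constructed, even though SwiftUI keeps only the first instance, so a heavy init on the @Observable class runs repeatedly.

import SwiftUI
import Observation

@Observable
final class HeavyModel {
    var count = 0
    init() {
        // Any heavy setup here runs each time the parent re-creates ChildView.
        print("HeavyModel.init")
    }
}

struct ChildView: View {
    // The initializer expression runs on every ChildView() construction;
    // only the first HeavyModel instance is actually kept by SwiftUI.
    @State private var model = HeavyModel()

    var body: some View {
        Button("count: \(model.count)") { model.count += 1 }
    }
}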
@ynagatomo
ynagatomo / Extension+ModelComponent.swift
Created April 29, 2024 11:48
An extension of ModelComponent that dumps its MeshResource.Model contents, such as vertex positions and normals.
extension ModelComponent {
    /// Dump the MeshResource.Model
    func dumpMeshResourceModel() {
        let printSIMD3Float = { (value: SIMD3<Float>) in
            print("(\(value.x), \(value.y), \(value.z)), ", terminator: "")
        }
        let printSIMD2Float = { (value: SIMD2<Float>) in
            print("(\(value.x), \(value.y)), ", terminator: "")
        }
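The preview is truncated; the rest of the extension walks the mesh contents. A rough sketch of that traversal, assuming the standard MeshResource accessors, could look like this:

import RealityKit

// Sketch only: walk each model and part of a MeshResource and print vertex data counts.
func dumpMesh(_ mesh: MeshResource) {
    for model in mesh.contents.models {
        print("model: \(model.id)")
        for part in model.parts {
            print("  part: \(part.id)")
            print("    positions: \(part.positions.elements.count)")
            if let normals = part.normals {
                print("    normals: \(normals.elements.count)")
            }
            if let uvs = part.textureCoordinates {
                print("    texture coordinates: \(uvs.elements.count)")
            }
        }
    }
}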
@ynagatomo
ynagatomo / RealityRendererTest.swift
Last active October 12, 2024 20:57 — forked from arthurschiller/RealityRendererTest.swift
RealityRenderer Test (visionOS)
//
// RealityRendererView.swift
// RealityRendererTest
//
// Created by Arthur Schiller on 11.01.24.
//
// Change Log: by Yasuhito Nagatomo
// - Added ImageBasedLighting, Mar 2, 2024
// - Added a camera rotation animation, Mar 2, 2024
@ynagatomo
ynagatomo / RealityDump.swift
Created January 13, 2024 06:15
A simple function that prints out the structure of RealityKit Entities in visionOS.
//
// RealityDump.swift
//
// Created by Yasuhito Nagatomo on 2024/01/13.
//
import Foundation
import RealityKit
import SwiftUI
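The core of such a dump is a recursive walk over the entity hierarchy. A minimal sketch of that idea, not the gist's actual implementation:

// Sketch: recursively print an entity tree with indentation.
func dumpEntity(_ entity: Entity, indent: Int = 0) {
    let pad = String(repeating: "  ", count: indent)
    let name = entity.name.isEmpty ? "(unnamed)" : entity.name
    print("\(pad)- \(name) [\(type(of: entity))] components: \(entity.components.count)")
    for child in entity.children {
        dumpEntity(child, indent: indent + 1)
    }
}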
@ynagatomo
ynagatomo / ContentView.swift
Created November 25, 2023 06:58
Custom Debug Window in visionOS
//
// ContentView.swift
// DebugWindow
//
// Created by Yasuhito Nagatomo on 2023/11/25.
//
import SwiftUI
import RealityKit
import RealityKitContent
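A common pattern for a debug window in visionOS is a second WindowGroup with its own id, opened via the openWindow environment action. A rough sketch of that pattern (the id and view names are assumptions, not necessarily what this gist uses):

import SwiftUI

@main
struct DebugWindowApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        // Separate window dedicated to debug output.
        WindowGroup(id: "debugWindow") {
            DebugView()
        }
        .defaultSize(width: 400, height: 300)
    }
}

struct DebugView: View {
    var body: some View {
        Text("debug info goes here").padding()
    }
}

// Opening it from any view:
// @Environment(\.openWindow) private var openWindow
// Button("Show Debug") { openWindow(id: "debugWindow") }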
@ynagatomo
ynagatomo / ContentView.swift
Created November 16, 2023 08:30
A sample head-up display (HUD) component for visionOS
// ContentView.swift
import SwiftUI
import RealityKit
import RealityKitContent
struct ContentView: View {
    @State private var showImmersiveSpace = false
    @State private var immersiveSpaceIsShown = false
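One straightforward way to build a head-up display in visionOS is to parent the HUD content to a head anchor so it follows the wearer. A minimal sketch of that idea (the offset and plane are illustrative; the gist's HUD component may be implemented differently):

import RealityKit

// Sketch: attach HUD content to a head anchor so it stays in front of the viewer.
func makeHUD() -> Entity {
    let headAnchor = AnchorEntity(.head)
    let panel = ModelEntity(
        mesh: .generatePlane(width: 0.3, height: 0.1),
        materials: [SimpleMaterial(color: .white, isMetallic: false)]
    )
    panel.position = [0, 0, -0.5]   // half a meter in front of the head anchor
    headAnchor.addChild(panel)
    return headAnchor
}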
@ynagatomo
ynagatomo / BillboardComponent.swift
Created November 11, 2023 23:56
A sample BillboardComponent for RealityKit in visionOS
// BillboardComponent.swift
import ARKit
import RealityKit
import SwiftUI
public struct BillboardComponent: Component, Codable {
    public init() {}
}
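The component itself is just a marker; the matching System rotates tagged entities toward the viewer each frame. A sketch of such a System follows; how the viewer position is obtained (for example from ARKit's WorldTrackingProvider device anchor) is an assumption here, not necessarily the gist's approach.

import RealityKit

// Sketch: turn every entity carrying BillboardComponent toward the viewer.
struct BillboardSystem: System {
    static let query = EntityQuery(where: .has(BillboardComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        // Replace with the real head/camera position for your app.
        let viewerPosition = SIMD3<Float>(0, 1.5, 0)

        for entity in context.scene.performQuery(Self.query) {
            entity.look(at: viewerPosition,
                        from: entity.position(relativeTo: nil),
                        relativeTo: nil)
        }
    }
}

// Registration, e.g. at app launch:
// BillboardComponent.registerComponent()
// BillboardSystem.registerSystem()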
@ynagatomo
ynagatomo / ml-converting.txt
Created December 17, 2022 13:33
Console output from converting Stable Diffusion v2 models to Core ML models
# Converting Stable Diffusion v2 models to CoreML models, Dec 17,2022
#
# MacBook Air/M1/8GB memory, macOS 13.2 beta, Python 3.8
# apple/ml-stable-diffusion v0.1.0
#
(base) ynaga@YasuhitonoMacBook-Air ~ % conda activate coremlsd2_38
(coremlsd2_38) ynaga@YasuhitonoMacBook-Air ~ % cd Temp
(coremlsd2_38) ynaga@YasuhitonoMacBook-Air Temp % mkdir SD2ModelConvChunked
(coremlsd2_38) ynaga@YasuhitonoMacBook-Air Temp % cd SD2ModelConvChunked
(coremlsd2_38) ynaga@YasuhitonoMacBook-Air SD2ModelConvChunked % git clone https://github.com/apple/ml-stable-diffusion
@ynagatomo
ynagatomo / SD2ContentView.swift
Created December 4, 2022 08:56
An iOS app that generates images using Stable Diffusion v2 Core ML models.
//
// ContentView.swift
// coremlsd2test
//
// Created by Yasuhito Nagatomo on 2022/12/03.
//
// Sample code using the apple/ml-stable-diffusion library.
// Preparation:
// 1. convert the PyTorch Stable-Diffusion v2 model to coreml models using Apple's tools.
// 2. import the coreml models into the iOS project.
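For reference, generation with the Swift package in apple/ml-stable-diffusion follows roughly the shape below. The exact parameter list of generateImages has changed across releases, so treat this as an approximation rather than the gist's exact code.

import CoreGraphics
import CoreML
import StableDiffusion

// Sketch: load the converted Core ML resources and generate one image.
func generate(resourcesURL: URL, prompt: String) throws -> CGImage? {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU          // or .cpuAndNeuralEngine

    let pipeline = try StableDiffusionPipeline(
        resourcesAt: resourcesURL,             // folder holding the .mlmodelc files
        configuration: config
    )

    let images = try pipeline.generateImages(
        prompt: prompt,
        imageCount: 1,
        stepCount: 25,
        seed: 42
    ) { _ in true }                            // progress handler; return false to cancel

    return images.first ?? nil
}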