Lens Studio User Context — Reference Guide
This guide covers Snapchat social APIs available in Lens Studio: user identity, Bitmoji avatars, friends, social sharing (Dynamic Response), and leaderboards.
UserContextSystem
UserContextSystem provides information about the current Snapchat user.
const userContextSystem = global.userContextSystem
// Get the current user's SnapchatUser object
const currentUser: SnapchatUser = userContextSystem.getCurrentUser()
print('Display name: ' + currentUser.displayName)
// Check if the user has a Bitmoji
if (currentUser.hasBitmoji()) {
  loadBitmoji2D(currentUser)
}
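UserContextSystem also exposes callback-based accessors for individual fields. A minimal sketch using requestDisplayName (the callback fires asynchronously, and only if the user's privacy settings allow it):
global.userContextSystem.requestDisplayName((displayName: string) => {
  print('Display name (via callback): ' + displayName)
})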
Bitmoji 2D (Sticker)
Load a user's Bitmoji as a flat 2D texture:
const bitmojiModule = require('LensStudio:BitmojiModule')
const remoteMediaModule = require('LensStudio:RemoteMediaModule')
function loadBitmoji2D(user: SnapchatUser): void {
  // Step 1: Request the Bitmoji 2D resource URL
  const resource = bitmojiModule.requestBitmoji2DResource(user)

  // Step 2: Fetch the resource and apply it as a texture
  remoteMediaModule.loadResourceAsImageTexture(
    resource,
    (texture: Texture) => {
      // Apply the texture to a screen image or material
      screenImage.mainPass.baseTex = texture
      print('Bitmoji 2D loaded for: ' + user.displayName)
    },
    (error: string) => {
      print('Failed to load Bitmoji 2D: ' + error)
    }
  )
}
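The snippet above assumes screenImage, bitmojiModule, and remoteMediaModule are already in scope. One way to wire them up as a Lens Studio TypeScript component (class and input names are illustrative):
@component
export class BitmojiStickerLoader extends BaseScriptComponent {
  // Image component that will display the loaded sticker texture
  @input screenImage: Image

  onAwake() {
    const user = global.userContextSystem.getCurrentUser()
    if (user.hasBitmoji()) {
      // loadBitmoji2D (above) would assign the texture to this.screenImage
      loadBitmoji2D(user)
    }
  }
}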
Bitmoji 3D
Load a user's full 3D Bitmoji avatar into the scene:
function loadBitmoji3D(user: SnapchatUser, parent: SceneObject): void {
  // Step 1: Request the 3D resource
  const resource = bitmojiModule.requestBitmoji3DResource(user)

  // Step 2: Fetch and instantiate as a scene object
  remoteMediaModule.loadResourceAsSceneObject(
    resource,
    (bitmojiObject: SceneObject) => {
      bitmojiObject.setParent(parent)
      bitmojiObject.getTransform().setLocalPosition(vec3.zero())
      print('Bitmoji 3D loaded for: ' + user.displayName)

      // Animate: find the AnimationMixer on the loaded Bitmoji
      const animator = bitmojiObject.getComponent('Component.AnimationMixer')
      if (animator) {
        animator.startAll(0, -1) // offset 0, loop forever
      }
    },
    (error: string) => {
      print('Failed to load Bitmoji 3D: ' + error)
    }
  )
}
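A usage sketch, assuming the code runs inside a script component and the avatar should appear under that component's own scene object:
const user = global.userContextSystem.getCurrentUser()
if (user.hasBitmoji()) {
  loadBitmoji3D(user, this.sceneObject)
}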
Playing a specific animation on a Bitmoji 3D
const animator = bitmojiObject.getComponent('Component.AnimationMixer')

// Play a named animation layer at full weight (offset 0, loop forever)
animator.setWeight('wave', 1.0)
animator.start('wave', 0, -1)

// Stop
animator.stop('wave')
Bitmoji Head (Live Face Animation)
The Bitmoji Head reacts to the user's face in real time using face tracking.
- Add a Bitmoji Head to the scene (+ → Bitmoji → Bitmoji Head).
- Connect it to the FaceTracking component on your Head object.
- The Bitmoji head will mirror the user's expressions (mouth open, eyebrows, etc.) automatically.
To access the Bitmoji head in script:
const bitmojiHead = bitmojiHeadObject.getComponent('Component.BitmojiHeadComponent')
bitmojiHead.faceTracker = faceTrackingComponent
Friends API
Listing friends with FriendsComponent
// Add a FriendsComponent to your scene object in the inspector,
// then access it from script:
const friendsComponent = this.sceneObject.getComponent('Component.FriendsComponent')
// Get the list of friends (up to the platform limit)
friendsComponent.getFriends((friends: SnapchatUser[]) => {
  friends.forEach(friend => {
    print('Friend: ' + friend.displayName)
    loadBitmoji2D(friend) // load their Bitmoji (reuse function from above)
  })
})
FriendInfo Component
Shows a single friend's avatar and display name in a UI panel:
- Add a FriendInfo component to a scene object.
- Set the friend input to a SnapchatUser from the Friends API.
const friendInfoComponent = this.sceneObject.getComponent('Component.FriendInfoComponent')
friendInfoComponent.friend = selectedFriend // set to a SnapchatUser from getFriends()
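Combining the two APIs: a sketch that shows the first friend returned by getFriends in the panel (reusing friendsComponent from above):
friendsComponent.getFriends((friends: SnapchatUser[]) => {
  if (friends.length > 0) {
    friendInfoComponent.friend = friends[0]
  }
})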
Bitmoji Selfies & Stickers Component
Displays a side-by-side or combined Bitmoji image for the user and a friend:
const selfieStickerComponent = this.sceneObject.getComponent('Component.BitmojiSelfiesStickerComponent')
// Set user and friend for a combined selfie Bitmoji sticker
selfieStickerComponent.primaryUser = currentUser
selfieStickerComponent.secondaryUser = selectedFriend
Dynamic Response (Poster / Responder mechanic)
Dynamic Response lets a lens share data and Snap media between a Poster (the person who sends a Snap) and Responders (friends who receive it and tap to open).
Flow
- Poster opens the lens, customises it, and sends/posts a Snap.
- Responder taps the Snap; the lens opens in Responder mode with data from the Poster.
Setup
- Download the Dynamic Response component from the Asset Library.
- Add the DynamicResponseComponent to a scene object.
- Define tappable areas in the inspector (regions the Responder can tap on the received Snap).
Reading Poster data in Responder mode
const dynamicResponse = this.sceneObject.getComponent('Component.DynamicResponseComponent')
dynamicResponse.onResponderActivated.add(() => {
  // We are in Responder mode — read data the Poster embedded
  // Always sanitise: Poster data is a plain string with no schema enforcement
  const raw: string = dynamicResponse.getPosterData('myKey') ?? ''
  const posterData = raw.slice(0, 256) // cap length; validate further if driving logic
  print('Poster sent: ' + posterData)

  // Show the Responder-specific UI
  responderUI.enabled = true
  posterUI.enabled = false
})
Writing data as the Poster
// Called when the Poster is about to send the Snap
dynamicResponse.setPosterData('myKey', 'Hello Responder!')
dynamicResponse.setPosterData('score', '42')
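For structured data, one option is to serialise a small object to JSON on the Poster side, since only strings can be embedded (assuming the payload stays within the platform's string-size limit):
dynamicResponse.setPosterData('state', JSON.stringify({ level: 3, color: 'red' }))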
Checking which mode we're in
if (dynamicResponse.isPoster()) {
  print('We are the Poster — show customisation UI')
} else if (dynamicResponse.isResponder()) {
  print('We are the Responder — read Poster data')
}
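Because Poster data arrives as an untyped string, parse numeric values defensively before using them. A minimal sketch for the 'score' key written above (the length cap and clamp range are illustrative):
const rawScore: string = dynamicResponse.getPosterData('score') ?? ''
const parsed = parseInt(rawScore.slice(0, 16), 10)
// Reject NaN and clamp to a sane range instead of trusting the string
const safeScore = isNaN(parsed) ? 0 : Math.min(Math.max(parsed, 0), 999999)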
Leaderboard Module
For the full API reference with code examples, see the lens-studio-world-query skill. Summary:
const leaderboardModule = require('LensStudio:LeaderboardModule')
const options = Leaderboard.CreateOptions.create()
options.name = 'MY_LEADERBOARD'
options.ttlSeconds = 0 // 0 = permanent
options.orderingType = Leaderboard.OrderingType.Descending
leaderboardModule.getLeaderboard(options,
  (lb) => {
    // Submit
    lb.submitScore(100, (info) => print('Submitted for ' + info.snapchatUser.displayName), print)

    // Retrieve top 10
    const r = Leaderboard.RetrievalOptions.create()
    r.usersLimit = 10
    r.usersType = Leaderboard.UsersType.Global
    lb.getLeaderboardInfo(r,
      (records, myRecord) => {
        records.forEach((rec, i) => print(`#${i+1}: ${rec.snapchatUser?.displayName} — ${rec.score}`))
      },
      print
    )
  },
  print
)
Common Gotchas
- UserContextSystem requires user consent — if the user hasn't granted the Bitmoji permission, hasBitmoji() returns false. Always check before requesting.
- Bitmoji loading is async — always update the UI in the remoteMediaModule callback, not immediately after calling requestBitmoji3DResource.
- getFriends() list size is platform-limited — don't assume you can get all friends; design for partial lists.
- Dynamic Response Poster data is an unvalidated string. Always sanitise it (cap length, check format) before using it to drive UI or game logic — a crafted Snap could inject arbitrary content.
- Dynamic Response tappable areas: if you add tappable areas in the inspector, the platform's Call-to-Action (CTA) button is replaced by the tappable shimmer on the received Snap.
- Dynamic Response is not available in the Lens Studio simulator — test Poster/Responder flow via Snapchat on two devices.
- Leaderboard scores are submitted from the client. There is no server-side score validation — for competitive lenses, use a Snap Cloud Edge Function to verify scores before writing to the leaderboard.
- Leaderboard names are global to the lens — two lenses with the same name string share the same leaderboard. Use unique names.