EVRN.ai
Product Framework v2 — March 2026
Build.
Compose.
Visualize.
A scene-first AI 3D creation tool built around a three-phase creative flow. Users go from a raw image to a fully lit, rendered 3D scene — with real-time inference, texture inpainting, and camera control at every step. Not just an asset generator. A scene director.
The Three-Phase Flow
Phase 01
🧱
BUILD
Prepare your image and generate 3D assets from it. The prep stage is new — clean the image before AI generation for better results.
Import image / photo / reference
2D image prep editor (BG removal, crop, mask)
AI generates 3D mesh + textures
Asset saved to your scene library
Phase 02
🎬
COMPOSE
Arrange multiple assets in a real 3D scene. Place objects, set depth, establish spatial relationships. This is where 3D empowers you.
Drag assets from library into 3D viewport
Position, scale, rotate in full 3D space
Set environment, ground, atmosphere
Place and frame camera
Phase 03
✦
VISUALIZE
Set the look. Lighting mood, camera lens, depth of field, style filters. Real-time AI preview updates as you adjust. One-click to final render.
Choose lighting mood preset
Camera lens & FOV controls
Real-time AI denoised preview
Export image / video / 3D file
Sub-Phase
✏️
EDIT
Deep asset editing accessible from any phase. Texture inpainting, UV editing, mesh adjustments — all within the same creative flow.
Texture inpainting (paint directly on mesh)
Simplified UV editor
Mesh shape adjustments
AI-assisted repair & cleanup
What Makes This Different
Build Phase
Image Prep Studio
Nobody else has this. A lightweight 2D editor before the 3D generation step. Clean your photo, remove the background, isolate the subject, adjust lighting in 2D — so the AI has the best possible input. The garbage-in, garbage-out problem is solved at this step.
Visualize Phase
Real-Time Inference
Move a light and see the result update live. Change a material and it repaints in real-time. No batch-render wait cycles. Real-time AI denoising (similar to DLSS) makes this possible without heavy GPU requirements.
Edit Mode
Texture Inpainting on 3D
Select any region on a 3D mesh surface and repaint it with a prompt or a brush. "Repaint this section as cracked porcelain." AI fills the inpainted region matching the surrounding context. No UV editor required for basic edits.
Compose Phase
True 3D Scene + Camera
Unlike a 2D canvas metaphor, users work in real 3D space. The camera is a first-class object — you place it, frame it, set FOV and aperture. Users feel the power of 3D: depth, parallax, perspective control. This is the empowerment moment.
Build Phase
Multi-Asset Scene Building
Generate multiple assets from different images — a chair from one photo, a lamp from another, a table from a third. Combine them into a coherent scene. A built-in AI asset library supplements user-generated objects.
Edit Mode
Simplified UV Editor
Not hidden, but demystified. The UV editor shows the mesh as a flat map you can paint directly. Seams are AI-placed. The language: "Painting surface" not "UV unwrap." For power users who want control without Blender's complexity.
Phase 01
Build
The pre-generation prep stage. Transform raw images into clean 3D-ready inputs, then generate high-quality assets. The image prep step is the key differentiator — it means better 3D output every time.
The Image Prep Studio
Before the AI generates a 3D model, the user has a lightweight image editor to clean and prepare the source. This is new. No other 3D AI tool has this step. It exists because the quality of AI 3D generation depends heavily on the quality of the input image.
Prep Tools
Available in the Image Prep Studio
→ Background Removal — One-click AI BG removal. Isolates the subject cleanly, critical for accurate 3D silhouette generation.
→ Crop & Straighten — Frame the subject optimally before generation. Removes distortion from phone camera angles.
→ Light Normalisation — Removes harsh shadows and specular hotspots from reference photos. Gives AI a neutral surface to work from.
→ Region Masking — Paint a mask over parts of the image to exclude from generation. "Don't generate this shadow as geometry."
→ Multi-view Assist — If the user has multiple photos of the same object, merge them for better multi-angle 3D generation.
→ Detail Enhance — Upscale and sharpen fine surface details before generation. Better texture resolution on output.
Generation Settings
3D Generation Controls
→ Output Style — Photorealistic / Stylised / Low-poly / Sculptural. Sets the mesh style target before generation.
→ Poly Budget — "Web / Game / Cinematic" — controls mesh density. Simple selector, not raw polycount numbers.
→ Texture Resolution — 512 / 1K / 2K / 4K. Auto-selects based on poly budget. User can override.
→ Generate Variants — Produces 3 variations simultaneously. User picks the best one. Reduces iteration cycles.
→ Auto-Retopo — Runs smart retopology automatically after generation. Produces clean, scene-ready mesh.
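The "auto-selects based on poly budget" rule reduces to a small lookup with an override. A minimal sketch; the specific triangle counts and default resolutions below are illustrative assumptions, not shipped defaults.

```python
# Hypothetical mapping from the "Web / Game / Cinematic" poly budget
# selector to mesh density and a default texture resolution.
POLY_BUDGETS = {
    "web":       {"max_tris": 20_000,  "default_texture": 1024},
    "game":      {"max_tris": 80_000,  "default_texture": 2048},
    "cinematic": {"max_tris": 500_000, "default_texture": 4096},
}

def texture_resolution(budget, override=None):
    """Texture size in pixels: the user's override if given, else the budget default."""
    if override is not None:
        return override
    return POLY_BUDGETS[budget]["default_texture"]
```

Usage: `texture_resolution("game")` follows the budget default, while `texture_resolution("game", 512)` honours an explicit user override.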
Asset Library
User-Generated
Your Built Assets
Every 3D asset you generate from your images is saved to your personal library. Searchable, taggable, re-editable. The library grows with every Build session.
Built-in
AI Asset Library
A curated library of pre-built common scene elements — floors, walls, skies, generic furniture, props, plants. Supplements user assets so a scene can be built without needing to generate everything from scratch.
Community
Shared Scenes
Browse and remix community scenes. Open a shared scene and every asset, light, and camera setting is editable. This is the onboarding ramp — start from a scene you like, swap in your assets.
Phase 02
Compose
This is where users feel the power of 3D. Arrange multiple assets in real 3D space — not a flat canvas. Establish depth, spatial relationships, camera angle. The scene becomes a directed, intentional composition.
3D Viewport — Not a Canvas
This is the key revision from the previous framework. The scene is real 3D. Users navigate it. But navigation is guided — orbit, pan and zoom are simplified to feel approachable. Helper overlays show depth, ground plane, and camera frame at all times so users never feel lost.
3D Navigation
Guided 3D Navigation
Navigation is 3D but designed for non-3D people:
→ Orbit — Left-drag to rotate around scene. Snaps to clean angles (front, side, top, iso) on release if near them.
→ Pan — Middle-drag or space+drag. Familiar from every 2D tool.
→ Zoom — Scroll. Always zooms toward cursor. Consistent with Figma / Photoshop behavior.
→ Frame Selected — Press F to snap view to selected object. No getting lost.
→ Camera View — Press C to snap viewport into the placed camera. This is the "what the render will look like" view.
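The snap-on-release behaviour described for Orbit is a nearest-angle check. A minimal sketch, assuming yaw-only snapping and a 10-degree threshold (both assumptions for illustration):

```python
# Snap a released orbit yaw to the nearest "clean" angle if close enough.
SNAP_YAWS = [0.0, 90.0, 180.0, 270.0]   # front, right, back, left
SNAP_THRESHOLD = 10.0                    # degrees; illustrative value

def snap_yaw(yaw):
    """Return the snapped yaw (degrees), or the original yaw if no snap applies."""
    yaw = yaw % 360.0
    for target in SNAP_YAWS:
        # shortest angular distance, wrapping around 360
        delta = abs((yaw - target + 180.0) % 360.0 - 180.0)
        if delta <= SNAP_THRESHOLD:
            return target
    return yaw
```

A real implementation would snap pitch as well (for top and iso views); the wrap-around distance keeps 355° snapping to the front view rather than being treated as far from 0°.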
Object Placement
Scene Composition Tools
Placing objects in 3D without a complicated gizmo:
→ Drop on Surface — Drag asset from library, it lands on the ground plane or nearest surface. No manual Y-axis placement needed.
→ Context Snapping — Objects snap to surfaces, edges, and alignment guides. "Place on table" intent works via surface detection.
→ 2D Depth Strip — In Camera View, a depth strip on the edge lets you push/pull the selected object toward/away from camera without leaving the composition view.
→ AI Arrange — Describe: "Put the chair by the window, facing the table." AI positions objects in plausible spatial arrangement.
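Drop on Surface boils down to: find the highest surface under the drop point and rest the object's base there, falling back to the ground plane. A toy sketch in which axis-aligned rectangles stand in for real raycast hits (an assumption for brevity):

```python
from dataclasses import dataclass

@dataclass
class Surface:
    # Footprint of the surface in the XZ plane, plus its top height.
    x_min: float; x_max: float
    z_min: float; z_max: float
    top: float

def drop_height(x, z, surfaces):
    """Y at which a dropped object's base should rest for a drop at (x, z)."""
    hits = [s.top for s in surfaces
            if s.x_min <= x <= s.x_max and s.z_min <= z <= s.z_max]
    return max(hits, default=0.0)  # ground plane at y=0 if nothing is under the cursor
```

Dropping over a table whose top sits at 0.75 lands the asset on the table; dropping outside any surface lands it on the ground plane, so the user never does manual Y-axis placement.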
Camera as a First-Class Tool
The camera is not an afterthought. It's a tool you place, name, and work with intentionally. Multiple cameras can exist in a scene — like taking multiple shots on a film set. This is the core empowerment of 3D over 2D.
Camera Controls
Lens & Framing
FOV / focal length slider (presented as "Focal Length: 35mm / 50mm / 85mm" — real photography language). Aperture controls depth of field preview in real-time. Horizontal/vertical rule-of-thirds overlay.
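Presenting FOV as a focal length is a fixed conversion under a thin-lens model. A sketch assuming a full-frame 36mm sensor width (the sensor width is an assumption; the product could expose it or fix it internally):

```python
import math

def focal_length_to_fov(focal_mm, sensor_width_mm=36.0):
    """Horizontal field of view in degrees for a given focal length."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))
```

This is why the slider can speak photography language: 50mm maps to roughly a 40° horizontal FOV and 35mm to roughly 54°, values photographers already have intuition for.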
Shot System
Multiple Shots
Save a camera position as a "Shot." Switch between shots instantly — like multiple artboards in Figma. Each shot can have different camera angles, focal lengths, and even different lighting moods.
Cinematic
Camera Presets
One-click shots: Product Hero, Eye Level, Bird's Eye, Low Angle Drama, Over Shoulder. Each preset adjusts FOV, position, and tilt together. Photographers and directors recognize these terms immediately.
Phase 03
Visualize
Set the look. Real-time preview updates as you adjust lighting, style, and atmosphere. The final render is one click. Real-time AI inference is what makes this phase feel magical — no waiting, just creating.
Lighting System
Mood Presets
Emotion-Based Lighting
A grid of lit sphere previews. Click a mood — Golden Hour, Studio Product, Overcast Soft, Neon Night, Dramatic Side, Bright Outdoor, Candlelit Warm. The system sets HDRI + fill + bounce behind the scenes.
Manual Lights
Light Placement
For users who want control: add point lights, area lights, and a sun. Drag them in the viewport. Adjust colour with a temperature slider (Warm 2700K → Cool 7000K). No technical jargon beyond this.
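The temperature slider can map Kelvin to a light tint. A deliberately simplified sketch that blends linearly between two assumed endpoint colours instead of evaluating a true blackbody curve; both the endpoints and the linear blend are illustrative assumptions:

```python
# Assumed endpoint tints for the slider's range.
WARM = (255, 169, 87)    # ~2700K tungsten-like
COOL = (235, 238, 255)   # ~7000K overcast-like

def temperature_tint(kelvin):
    """RGB tint for a light, clamped to the slider's 2700K-7000K range."""
    t = (min(max(kelvin, 2700.0), 7000.0) - 2700.0) / (7000.0 - 2700.0)
    return tuple(round(w + (c - w) * t) for w, c in zip(WARM, COOL))
```

A shipping renderer would more likely use a blackbody fit (e.g. a Tanner Helland style approximation), but the linear blend is enough to drive a believable warm-to-cool slider.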
Style
Art Style Overlays
Apply a visual style pass: Photorealistic, Cinematic Film, Soft Illustration, Architecture Viz, Product Studio. Affects material rendering, post-processing, and ambient settings together as a package.
Render Pipeline
Preview
Real-Time AI Preview
As you adjust any setting — a light position, a texture, a camera angle — the viewport updates in real-time using AI-assisted denoising. 4–8 samples with DLSS-style AI denoising gives a photorealistic preview at interactive speeds. No waiting for a render to judge a change.
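The accumulation half of that loop can be sketched as a running average of noisy samples per interactive frame; the denoiser itself is not modelled here. A toy single-pixel version, where the noise range and sample counts are illustrative assumptions:

```python
import random

def accumulate(true_value, samples_per_frame, frames, rng):
    """Running average of noisy path-trace samples across interactive frames."""
    total, count = 0.0, 0
    for _ in range(frames):
        for _ in range(samples_per_frame):
            # one noisy path-trace sample of this pixel's radiance
            total += true_value + rng.uniform(-0.2, 0.2)
            count += 1
        # a real pipeline would hand (total / count) to the AI denoiser each frame
    return total / count
```

The point of the design is visible even in the toy: with only a handful of samples per frame the running average is already close to the converged value, and the denoiser's job is to hide the residual noise until convergence.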
Final
One-Click Render
Press Render. Full path-traced render with all lights, materials, and camera settings. Output options: image (PNG/JPEG/EXR), video (for animation shots), or 3D scene file (glTF/OBJ for export to Blender/Unity/Unreal).
Sub-Phase — Available Everywhere
Edit Mode
Deep asset editing accessible from any phase. Double-click any asset to enter Edit Mode. This is where texture inpainting, UV editing, and mesh adjustments live. The tools are powerful but wrapped in approachable UX.
Texture Inpainting
Inpainting
How Texture Inpainting Works
The user paints a selection mask directly on the 3D surface in the viewport (like selecting a region in Photoshop). Then they type a prompt: "cracked old paint" / "polished copper" / "worn leather stitching." The AI regenerates only the masked region, blending seamlessly with surrounding textures.
→ Works directly on the 3D surface — no need to open UV editor
→ AI matches the surrounding context (lighting, texture scale, wear)
→ Undo-able, non-destructive — original stored as a layer
→ Can also paint a reference image into the mask area
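The non-destructive behaviour described above can be sketched as a stack of mask-plus-patch layers composited over an untouched base texture. Dicts stand in for real image buffers and all names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class InpaintLayer:
    mask: set    # texels covered by the painted selection
    patch: dict  # texel -> AI-generated value for the masked region

@dataclass
class TextureStack:
    base: dict                            # original texture, never modified
    layers: list = field(default_factory=list)

    def add(self, layer):
        self.layers.append(layer)

    def undo(self):
        if self.layers:
            self.layers.pop()

    def composite(self):
        """Replay every accepted inpaint over a copy of the base texture."""
        out = dict(self.base)
        for layer in self.layers:
            for texel in layer.mask:
                out[texel] = layer.patch[texel]
        return out
```

Because `composite` always starts from an unmodified `base`, undo is just popping a layer; the original texture survives any number of inpaints.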
Brushes
Texture Brush System
For freeform texture painting without prompting:
→ Material Brush — Paint a material preset directly onto the surface. Size, opacity, hardness controls.
→ Blend Brush — Blend between two materials at the stroke boundary. Creates natural surface transitions.
→ Detail Brush — Adds micro-detail (scratches, dust, bump) as a multiplicative layer over the base material.
→ Clone Brush — Sample a texture region and paint it elsewhere on the surface. Like Photoshop's Clone Stamp but on 3D.
UV Editor — Demystified
UV System
The "Painting Surface" View
The UV editor is reframed as a "Painting Surface" — a flat representation of the mesh's surface that you can paint on directly. AI handles seam placement. The user just paints as if painting on a flattened version of the object. This is accurate UV editing with a non-threatening name and interface.
Power User UV
Manual UV Controls (Advanced)
For users who want manual UV control: seam painting, island adjustment, packing optimisation. Unlocked by toggling "Advanced Mode" in the UV panel. Never shown by default. Blender users will recognise the tools. Everyone else never needs to see them.
Mesh Editing
Sculpt
AI-Guided Sculpt
Basic sculpt brushes (smooth, inflate, pinch, flatten). Paired with AI cleanup that fixes topology issues created by user sculpting. Not a full ZBrush replacement — just enough to fix generated mesh artefacts and add character.
Repair
AI Mesh Repair
One-click: "Fix mesh." AI detects and repairs holes, non-manifold edges, flipped normals, and floating geometry. The common AI generation artefacts that currently make generated meshes unusable in production are resolved automatically.
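One of those repairs, hole detection, has a classic formulation: in a watertight mesh every edge is shared by exactly two triangles, so edges used only once trace the border of a hole. A sketch of that check (illustrative, not the product's detector):

```python
from collections import Counter

def boundary_edges(triangles):
    """Edges used by exactly one triangle, i.e. the borders of holes."""
    counts = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted((u, v)))] += 1  # undirected edge key
    return [edge for edge, n in counts.items() if n == 1]
```

A lone triangle reports all three of its edges as boundary; a closed mesh reports none, which is the condition "Fix mesh" drives toward before filling.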
Parts
Component Splitting
AI identifies logical parts of a mesh (legs, seat, back of a chair) and lets you select them independently. Enables individual part texturing, replacement, or animation without manual mesh surgery.
Full Interface Architecture
UI Mockup
The interface architecture across the Build and Compose phases. The phase switcher at the top drives context-sensitive panels. The viewport is always the dominant element.
scene.ai — product_shoot_v3.scene
Source Image
⟳ Generate 3D
Perspective
📷 CAM-01 · 50mm
3 objects
✦
Describe changes or ask AI to arrange…
Texture Inpaint
✦
"worn oak with scratches..."
Apply
Painting Surface (UV)
Open Full Editor →
3 objects · oak_chair selected
CAM-01 · 50mm · f/2.8
RT Preview: 12fps · AI Denoise ON
↑ BUILD phase active. Left panel: Image Prep + Asset Library. Centre: 3D viewport with camera frame overlay. Right: Transform, Camera, Inpaint, and UV panels contextually loaded.
Core Technology Differentiator
Real-Time
Inference
The single biggest difference users will feel between this product and every competitor. Real-time AI inference means the creative loop is uninterrupted. No batch render, no waiting, no "generate and hope." You adjust and see. Immediately.
The Speed Chain
User Action
<16ms
Move a light, paint a texture, rotate the camera. Input captured instantly.
Rasterised Preview
<16ms
A fast rasterised preview updates the viewport. Immediate visual feedback even before AI processes.
AI Denoised Frame
60–200ms
4–8 path-trace samples + AI denoising (DLSS-style). Photorealistic quality at interactive speeds.
Final Render
15–90s
Full path-trace (512–2048 samples). Only triggered by the user when satisfied with the real-time preview.
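The interactive stages of the chain sum to a worst-case latency budget, with the final render deliberately outside the loop. A sketch using the stated upper bounds, treating the stages as sequential worst cases (in practice the rasterised preview and denoised frame overlap):

```python
# Upper bounds taken from the speed chain above, in milliseconds.
INTERACTIVE_CHAIN_MS = {
    "input_capture": 16,        # user action registered
    "raster_preview": 16,       # fast rasterised viewport update
    "ai_denoised_frame": 200,   # 4-8 samples + AI denoise
}

def interactive_latency_ms(chain):
    """Worst-case time from user action to a photoreal preview frame."""
    return sum(chain.values())
```

The worst case lands well under a quarter of a second, versus 15 to 90 seconds for the final render, which is the quantitative version of "you adjust and see."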
Where Real-Time Inference Changes Everything
01 · BUILD
Generation Preview
As the 3D generation is running, a progressive real-time preview builds up in the viewport. The user sees the mesh materialising, not a loading spinner. They can cancel early if the direction is wrong, saving generation credits.
Progressive Build · Cancel-Aware
02 · COMPOSE
Live Shadow & GI
As you move objects in the scene, ambient occlusion, contact shadows, and indirect lighting update in real-time. You see immediately how an object placement affects the lighting on surrounding objects.
Real-Time GI · Contact Shadows
03 · VISUALIZE
Live Lighting Preview
Drag a light, switch a mood preset, change the time of day. The AI denoised preview updates within 200ms. The creative director gets immediate feedback on every lighting decision — this is the moment users feel like 3D professionals.
200ms Update · AI Denoise
04 · EDIT
Live Inpainting
Texture inpainting previews the result as you paint the mask — before you even confirm the prompt. AI runs a fast draft inference on the masked region in real-time, showing a preview that updates as the mask shape changes.
Draft Preview · Mask-Aware
Competitive Intelligence
vs. The Field
How this product stacks up against Intangible, ArtCraft, Meshy, and Tripo across the dimensions that matter most for the 2D-creator audience.
Feature Matrix
| Feature | This Product | Intangible.ai | ArtCraft | Meshy | Tripo |
| --- | --- | --- | --- | --- | --- |
| Image → 3D Generation (from user's own photos) | ✓ Core feature | ✗ Pre-built library only | ~ Via 3rd-party models | ✓ Core feature | ✓ Core feature |
| Image Prep Studio (2D editor before 3D generation) | ★ Unique feature | ✗ | ~ Basic BG removal | ✗ | ✗ |
| Scene Composition (multiple assets in 3D space) | ✓ Full 3D viewport | ✓ Strong | ~ 2D compositing | ✗ Asset-only | ✗ Asset-only |
| Real-Time AI Preview (live denoised rendering) | ★ Core differentiator | ✗ Batch generate | ✗ | ✗ | ✗ |
| Camera Controls (FOV, aperture, shots) | ✓ Full + Presets | ✓ Strong | ~ Basic | ✗ | ✗ |
| Texture Inpainting (paint changes on 3D surface) | ★ Unique | ✗ | ✗ | ~ Retexture only | ~ Magic Brush (limited) |
| UV Editing (simplified surface map editing) | ✓ Demystified | ✗ | ✗ | ✗ | ✗ |
| Lighting System (mood presets + manual lights) | ✓ Presets + Manual | ✓ Art styles | ✗ | ✗ | ✗ |
| In-App Render (path-traced final output) | ✓ Integrated | ✓ AI gen image/video | ✗ Exports to gen AI | ~ 3D to video | ✗ |
| Beginner Accessible (no 3D expertise required) | ✓ Designed for this | ✓ Yes | ✓ Yes | ✓ Yes | ~ API-heavy |
| User Feels 3D-Empowered (3D is the superpower, not hidden) | ★ Core thesis | ✓ Yes | ✗ 2D-first | ✗ Asset-only | ✗ Asset-only |
The Key Gaps vs Intangible (Your Closest Competitor)
Intangible.ai
✓ Strong scene composition & camera control
✓ Browser-based, beginner accessible
✓ Pre-built 5,000+ asset library
⚠ No image-to-3D from user's own photos
⚠ No texture inpainting or editing
⚠ No UV editing capability
⚠ Batch-generate workflow, no real-time inference
⚠ No image prep studio
⚠ Focused on pre-vis / pre-production. Not a creation tool.
VS
This Product
✓ Scene composition + camera (matches Intangible)
✓ Image prep studio before 3D generation (unique)
✓ Generate 3D from user's own images (unique vs Intangible)
✓ Texture inpainting on 3D surfaces (unique)
✓ Simplified UV editor (unique)
✓ Real-time AI denoised preview (unique)
✓ Full edit capabilities (mesh, UV, texture)
✓ User-generated assets + built-in library
✓ User feels the power of 3D, not just AI output
One-Line Summary
The Positioning
"Intangible shows you what 3D can render.
We let you build what 3D can create — from your own images, edited to your exact intent, rendered in real time."
Intangible is pre-production for professionals. This product is creative empowerment for the 2D artist, the product designer, the indie creator — people who have never touched 3D but have always had something specific they wanted to make.