You are forgiven for not knowing about the University of Leiden's "Escher and the Droste effect" site from 2002, given that it shut down in 2024, but they were the first to try filling in the centre of Print Gallery and to make the association with the cocoa tins.
https://web.archive.org/web/20020802200015/http://escherdros...
It's cited by 3b1b themselves, who used Leiden's un-spiralized image to describe the effect.
I did my own version too: https://www.youtube.com/watch?v=xxLfDHe93_M
Why not include the Print Gallery image? Or, if worried about copyright, add the ability to load an image.
Why not allow the upload of an arbitrary image?
You would have to know the position of the smaller copy in the uploaded image for the effect to work.
The main issue is that the image needs to have a high enough resolution to be sharp at all zoom scales. Currently my images are vector graphics that I rasterize depending on the screen resolution.
The Escher Print Gallery requires even larger scales, as it uses a zoom factor of 256 across the image (vs. 16 for my images).

Others have solved this by either vectorizing the Print Gallery or even rebuilding the scene as a 3D signed distance field that can be sampled via ray marching.[1] The latter yields the best result, but I did not want to copy it.
[1]: https://www.shadertoy.com/view/Mdf3zM
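The infinite-zoom periodicity being described can be sketched with plain log-polar math (a minimal sketch; the function name and per-point formulation are mine, and in a shader this would run per pixel):

```javascript
// Map any point (x, y) back into the base annulus 1 <= r < s, where s is
// the zoom factor between successive copies (256 for Print Gallery,
// 16 for the images described above). Sampling the artwork at the
// mapped point makes every zoom level repeat seamlessly.
function drosteSample(x, y, s) {
  const r = Math.hypot(x, y);
  if (r === 0) return [0, 0]; // centre: no well-defined zoom level
  const theta = Math.atan2(y, x);
  // log r is periodic with period log s across zoom levels
  const k = Math.floor(Math.log(r) / Math.log(s));
  const rBase = r / Math.pow(s, k);
  return [rBase * Math.cos(theta), rBase * Math.sin(theta)];
}
```

Escher's twist adds a conformal rotation on top of this plain repetition, which is what the Leiden group worked out.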
Thank you.
what could go wrong
Very cool! I once tried rendering his towers. Mainly used normal canvas drawing though :)
https://bewelge.github.io/escherTower/
Note to other viewers: getting the Escher-esque effect requires tapping a checkbox at the top of the page (easy to miss on a large monitor).
I have now updated the default view to show the Escher effect right away :)
Very cool! However, it took me a while to figure out how this was supposed to be used.
For others:
On desktop, at least, you need to click and drag up/down on the left-hand control that says "swipe" with two arrows.
Or click "Autoplay".
laszlokorte -- can I suggest that the up/down icons should also be clickable/holdable? Since they're icons, they look like buttons, not a "swipe area". And also, maybe default to having autoplay on (but still with the controls visible)? Because it was not clear to me, at first, that the whole point of the site is infinite zoom.
Thanks for the suggestion! I added a slow initial auto zoom and updated the up/down arrows to work while being pressed.
Mouse scroll wheel works, too
This is awesome! I'd love to be able to upload a custom image too.
Stupid questions for the WebGL experts here:
- can you build an entire fps shooter game using web gl? how is physics handled? how is collision detection, enemy AI handled? what kind of frame rate can you expect from a counter strike game made in web gl?
- what is the difference between webgl and threejs and babylonjs?
- what is the man hour effort involved for doing something like this assuming you know html, css and js pretty well but not familiar with gamedev
- is open gl the non web version of web gl? or are they completely different?
Very few questions are stupid; these are not.
Yes, you can definitely build an entire FPS game using WebGL for rendering, typically using JavaScript to handle physics, collision detection, gameplay, etc.

My current WebGL project renders high-definition terrain, high-poly animated models, thousands of particles, shaders, sound and more at over 150 frames per second on a 10-year-old PC with an RTX 3060. I have found that hardware acceleration is often not enabled in the browser, or that Windows will default to the integrated graphics card when running the browser; that must be changed in the Windows Graphics Settings.

WebGL is a graphics API, supported by the browser, for talking directly to the graphics card. Three.js and Babylon.js are libraries that make it easier to render 2D and 3D graphics; both use WebGL and/or WebGPU behind the scenes for rendering.

Development with HTML/CSS/JavaScript and WebGL is my favorite stack to work with. Development is fast, re-loading is quick, and errors and debugging are handled directly in the browsers, which have great debug information and performance tracking. No compile time, and support on lots of devices.

Yes, OpenGL came first. WebGL is a JavaScript binding to a subset of OpenGL's functionality.
- first of all thank you very much for the detailed insight
- as a guy who is very much new to gamedev, threejs etc but not to programming (have a decade of programming experience on backends, android apps etc) i am running into lots of questions as i try to build a mental model of what game dev process looks like
- let us say i wanted to add a player 3d model into this setup, the player can walk, run, crouch, shoot, throw a grenade, go prone, take cover to the wall etc. how do these animations get implemented? what kind of tools are needed for making these animations
- i read that the technique used is called skeletal animation. how are you supposed to think about this? you press w, the character moves forward. in terms of animation that means your character needs to play the standing at one place animation initially and transition to the walking animation as long as the w button is pressed. now you press shift and this walking animation needs to transition to running animation as long as shift is pressed. is this the right way to think about this?
- do we need intermediate animations like "transition from walk to run", "transition from run to walk", "transition from walk to crouch" etc? that would add a lot of states would it not?
- are there LLM tools that you are aware of that are capable of generating these animations?
- i also read there are different file formats like obj, fbx, m3d, glb etc. is the same data stored in these files in a slightly different way like csv vs json or are they completely different?
> what kind of tools are needed for making these animations

They are motion-captured and/or animated by hand in your 3D editor, e.g. Blender.

But much more likely, you won't be making animations; you'll be buying them (or getting them for free). There are many places where you can buy these animations, already rigged to a skeleton.
Some examples (I don't endorse them specifically):
https://characters3d.com/
https://www.unrealengine.com/en-US/blog/game-animation-sampl...
> skeletal animation. how are you supposed to think about this

Think about giving direction to an actor. You give high-level instructions to the animation system, and it picks the animation based on rules, which you set up in advance, about what animation to use in which situation. It manages the transition to the next animation; all of these are animations of the skeleton, which the character model adapts to (including physics-based parts of the character, like hair and cloth).

Generally speaking, you define animation cycles (e.g. a walk cycle and a run cycle) and then transition between two animations that are in phase with each other, but it can get a lot more complicated in order to look more natural.
Unity has the Animation Controller. Unreal has "Motion Matching". Godot has Animation Trees.
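A toy sketch of what such controllers do internally (all names and the rule set here are invented for illustration): a state machine maps high-level input to a clip, and a crossfade weight blends the new clip in over a fixed transition time:

```javascript
// Toy animation controller: picks a clip from input flags and
// crossfades from the previous clip to the current one over fadeTime.
class AnimController {
  constructor(fadeTime) {
    this.fadeTime = fadeTime;
    this.current = "idle";
    this.previous = null;
    this.elapsed = fadeTime; // start fully faded in
  }
  // High-level rules: input flags -> clip name
  pickClip({ moving, sprinting }) {
    if (!moving) return "idle";
    return sprinting ? "run" : "walk";
  }
  update(input, dt) {
    const next = this.pickClip(input);
    if (next !== this.current) {
      this.previous = this.current;
      this.current = next;
      this.elapsed = 0; // restart the crossfade
    }
    this.elapsed = Math.min(this.elapsed + dt, this.fadeTime);
    // weight is the blend factor of the *current* clip, 0..1
    return { clip: this.current, weight: this.elapsed / this.fadeTime };
  }
}
```

Real controllers add layers (upper body vs. legs), phase-matching of cycles, and per-transition rules, but the shape is the same.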
> do we need intermediate animations

If you want to, yes, but you can also have the game engine interpolate.

You haven't even mentioned things like having the character's feet stand realistically on non-level ground. For that you would use inverse kinematics, but not too much of it, because it has a tendency to go wonky.
> are there LLM tools

Yes, but you'd be better off with animations someone has already created; they tend to look better. Many companies are now offering AI-based 3D character generators too.

> formats like obj, fbx, m3d, glb etc. the same data stored in these files in a slightly different way

They all have different purposes. You want glTF/GLB (the same format in text vs. binary form) for most purposes.

Try out this third-person shooter demo for the Godot Engine: https://github.com/godotengine/tps-demo
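To get a feel for the text-vs-binary distinction: OBJ is plain text you can read in an editor, while a GLB file wraps the glTF JSON (plus its binary buffers) behind a small binary header. A sketch of reading that 12-byte header, following the public glTF 2.0 spec (the function name is mine):

```javascript
// GLB files start with a 12-byte header:
//   uint32 magic   = 0x46546C67 (ASCII "glTF", little-endian)
//   uint32 version = 2
//   uint32 length  = total file size in bytes
function readGlbHeader(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const magic = view.getUint32(0, true); // true = little-endian
  if (magic !== 0x46546c67) throw new Error("not a GLB file");
  return { version: view.getUint32(4, true), length: view.getUint32(8, true) };
}
```

After the header come length-prefixed chunks: the JSON scene description, then the binary buffer data.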
- once again thank you very much for the details
- so let us say for purposes of learning, i wanted to make an fps or a third person shooter (3D) without using unreal, unity, godot or any popular engine out there
- what does the process look like roughly?
- i managed to get c++ running (programmer here with a decade of non gamedev experience) and also added raylib and looked into jolt physics
- got a 3d grid constructed, window created, character model added
- what would be my next bunch of steps?
- should i add animations for each of the player states like walk, run etc?
- should i program interactions like shoot, throw a grenade etc?
- or should I start working on enemy AI like pathfinding A* algorithm with state machine?
- trying to code cooperative mode here so i looked into c++ udp libraries like enet. i am assuming latency and game reconciliation algorithms would be step 1 if you want to build co-op from the ground up? basically create a server.cpp and a client.cpp and make the game loop work without crashing in cooperative mode on day 0. then worry about adding any interaction at all
- truly trying to comprehend at a high level what day 0 to day 1000 of a game looks like
You'd be committing the classic fallacy of "I'll just work on these tools, then make the game", which, while a fun exercise, almost never results in a game being released.
Think about what your ultimate goal is:
- you want to make games: use an existing engine. don't bother with half of the features, focus on whether the game is fun or not. add polish (like character animation transitions) later. use stock assets to begin with.
- you want knowledge to work in games industry but not actually release a game yourself: learn all the bells and whistles of Unreal Engine
- you want to make things that are unlike regular games: develop your own code
- you don't ever intend to release a game, you just want to see how they're made: just read other people's code. Read the Quake engine source code and https://fabiensanglard.net/ as a companion site.
If you're talking about using raylib, that is also a game engine, just a simpler one. We can look in both directions; if this is an exercise exclusively for personal learning and development, why not also learn about what's done for you by that library and by the GPU, etc? Occlusion, rasterisation, depth buffering, perspective-correct texture mapping...
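To pick one item from that list: perspective-correct texture mapping is a nice example of something the GPU quietly does for you, and the core trick fits in a few lines. Interpolating a texture coordinate u linearly in screen space makes textures swim; instead you interpolate u/w and 1/w, then divide. A sketch along one edge (the function name is mine):

```javascript
// Perspective-correct interpolation of attribute u between two vertices
// with clip-space w values w0 and w1, at screen-space parameter t in [0, 1].
// A naive lerp of u would ignore that equal screen steps cover unequal
// depth steps.
function perspCorrect(u0, w0, u1, w1, t) {
  const invW = (1 - t) / w0 + t / w1;                  // lerp 1/w
  const uOverW = (1 - t) * (u0 / w0) + t * (u1 / w1);  // lerp u/w
  return uOverW / invW;
}
```

For example, halfway across the screen between a near vertex (w = 1) and a far one (w = 3), the correct texture coordinate is only a quarter of the way along the texture, not half: the far portion of the edge is compressed on screen.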
"the number one most important skill is how to keep a tangle of features from collapsing under the weight of its own complexity" https://prog21.dadgum.com/177.html
this is what game engines do - they abstract the essential complexity present in all games, and keep it from infecting the one-time object, your game.
If you want to learn about games, honestly, take a look at existing engines. Take a look at old engines like DOOM or Quake, or even http://cubeengine.com/ and http://sauerbraten.org/ (and their corresponding source code) -- they are very simple compared to modern FPS engines. The Cube engines render geometry using octrees rather than the traditional BSP or recursive portal approach.
> I am assuming latency and game reconciliation algorithms would be step 1

Yes. If you intend to make a networked game, write your netcode first: share state with the client(s) over a network protocol, even if the network is 127.0.0.0/8.

Netcode is its own area of study:
- https://developer.valvesoftware.com/wiki/Latency_Compensatin...
- https://developer.valvesoftware.com/wiki/Source_Multiplayer_...
- https://github.com/0xFA11/MultiplayerNetworkingResources
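The core reconciliation loop those links describe can be sketched in a few lines (a toy 1D model; all names here are invented): the client predicts movement immediately, keeps the inputs the server has not yet acknowledged, and on each authoritative snapshot rewinds to the server's state and replays the pending inputs:

```javascript
// 1D toy world: position moves by input direction * SPEED each tick.
const SPEED = 5;
function applyInput(pos, dir) { return pos + dir * SPEED; }

class PredictingClient {
  constructor() { this.pos = 0; this.pending = []; this.seq = 0; }
  // Player pressed a direction: predict locally, remember the input
  localInput(dir) {
    this.pending.push({ seq: this.seq++, dir });
    this.pos = applyInput(this.pos, dir);
  }
  // Authoritative snapshot from the server: { pos, lastProcessedSeq }
  onServerState(snap) {
    this.pos = snap.pos; // rewind to the server's truth...
    this.pending = this.pending.filter(i => i.seq > snap.lastProcessedSeq);
    for (const i of this.pending) {
      this.pos = applyInput(this.pos, i.dir); // ...and replay what it hasn't seen
    }
  }
}
```

If client and server simulate inputs identically, the replayed position matches the prediction and the player never sees a snap; divergence only appears when the server disagrees (e.g. a collision the client missed).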
Gamers have opinions about netcode, because it affects how they have to think in order to play the game, so netcode becomes as much a creative endeavour as the level design, graphics, etc.
Every area of endeavour you've mentioned is a fractal of timesuck. They all have their basics and then their advancements, that have been built up by thousands of people over decades.
If you are learning by doing, for god's sake, keep it simple. Make the simplest thing that works. If you're making an FPS, have static geometry and non-animated character models (a 2D sprite will do). Prioritise having the most basic thing working as your goal. Otherwise you will be off in the weeds for years and you'll probably give up having built nothing.
> what day 0 to day 1000 of a game looks like

Pick a baseline (whether that's a game engine or a raw language) and then spend the rest of the time making the game: designing gameplay, levels, movement, interactivity, playtesting, feedback, placeholder art, real art... It's about standing on the shoulders of giants rather than re-inventing the wheel, and putting your mind and creativity into the new thing, which is your game.

> Development with HTML/CSS/JavaScript and WebGL is my favorite stack to work with.
I love this myself, but...
> have great debug information
How do you debug WebGL stuff? I find that to be one of the least debuggable things I've ever done with computers. If there's multiple shaders feeding into one another, the best I can usually come up with is drawing the intermediate results to screen and debugging visually. Haven't been paying too much attention to the space the past 2-3 years though, so I'm wondering if some new tools emerged that make this easier.
The JavaScript debugging is great right out of the browser these days.
WebGL debugging is trickier; much of it does end up being visual, especially for shader-related issues. For API calls, logging gets most things figured out; there is also this: https://github.com/KhronosGroup/WebGLDeveloperTools
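One low-tech trick for the API-call side: wrap the context in a Proxy that records every call before forwarding it (similar in spirit to the Khronos debug wrapper; with a real context you could also check gl.getError() after each call). A sketch, demonstrated on any object with methods:

```javascript
// Wrap an object so every method call is logged before it runs.
function makeLoggingContext(gl, log) {
  return new Proxy(gl, {
    get(target, prop) {
      const value = target[prop];
      if (typeof value !== "function") return value; // constants pass through
      return (...args) => {
        log.push(`${String(prop)}(${args.join(", ")})`);
        return value.apply(target, args);
      };
    },
  });
}
```

With a real context you would do `gl = makeLoggingContext(canvas.getContext("webgl2"), log)` once, and the rest of your code stays unchanged.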
You can implement the graphics part of it using WebGL; it's strictly a graphics API for drawing to the screen. But there are dedicated libraries, e.g. for physics, that you can use in your WebGL 2 app, or entire 3D engines (like those you mentioned) that target WebGL. Or you can DIY.
> is open gl the non web version of web gl? or are they completely different?
The current version of WebGL, WebGL 2, is like OpenGL ES 3.0.
> what is the man hour effort involved for doing something like this assuming you know html, css and js pretty well but not familiar with gamedev
Almost trivial with AI. I just started making games with three.js; three.js is pretty much the set of abstractions you'd end up writing yourself if you wanted to use WebGL directly.
The hard part is refining, polish, creating fun mechanics, and creating assets.
> Almost trivial with AI.
Not true in the slightest.
> The hard part is refining, polish, creating fun mechanics, and creating assets.
All things that AI cannot, by definition, do. So, not trivial at all with AI.
Fuck AI, man.
Cool, I think? It's unusable on mobile Google Chrome. Pinch to zoom worked for about a split second and now it’s broken
I have not implemented proper multi-touch controls yet. Currently the gizmos need to be used for zooming, panning and rotating.
I will add multi-touch gestures soon.
Nice! Nit: on mobile (Firefox, if it matters), swiping down for some time makes the edges very grainy.
Same with swiping up.
This is awesome. I'd love to see the original escher image scroll through there.
[flagged]