Pixel shaders - a need to reuse them

Two days ago I discovered this nice framework, Wade. It's awesome and lets you do a lot of things. Here are some of my examples, from simple to complex:








1. In all these examples I made heavy use of custom pixel shaders for animation. Now I want to reuse those shaders: save them as a project file (e.g. a JavaScript file in the project) and apply them to other sprites. But currently there is no option for that... There is a real need to save shader content to a file together with its uniforms, and reuse it on other sprites by adding it to their shader list.

Also, multiple shaders (multi-pass) on one sprite and in post-processing would be very useful for multi-pass techniques such as blurring and other complex effects.

Also, it seems that post-processing on the scene doesn't work on sprites with a custom shader. Maybe I'm wrong.

2. There is a problem with deploying the source code to a server (seganelservice.ru) by copying it to a folder. As is, the server shows a 404 error. You can see it in the browser console at http://seganelservice.ru/WebGL/WadeError/ where I simply copied the source code downloaded from Wade.

Errors from console:

/WebGL/WadeError/scene1_night_glows_posteffect_hard_rain.wsc:1 Failed to load resource: the server responded with a status of 404 (Not Found)
wade.js:469 Failed to load JSON data from scene1_night_glows_posteffect_hard_rain.wsc : SyntaxError: Unexpected token < in JSON at position 0
Wade.error @ wade.js:469
wade.js:469 Unable to load json file scene1_night_glows_posteffect_hard_rain.wsc

I solved this problem by renaming scene1_night_glows_posteffect_hard_rain.wsc to *.js and by patching app.js. But the error is strange - could you help me figure out the right way to solve it?

Comments 1 to 15 (71 total)

Hi coal and welcome to the forum

Very nice, I really like what you've done so far - the last one in particular is looking great.

Regarding pixel shaders:

Wade knows when you are re-using the same pixel shaders. If you have two sprites that use the same shader code, Wade will notice and will not create two different copies of the shader - both sprites will end up using the same one. It does this by comparing the hash of the string that you use for the shader code. So even when you think you're duplicating the same code, in fact it is only one shader.
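The deduplication described above can be sketched like this. Note this is only an illustration of the idea, not Wade's actual code; a real implementation would hash the source string, whereas here the string itself is used as the cache key, which has the same effect:

```javascript
// Sketch of shader deduplication: sprites asking for the same shader
// source get the same shader object back, so it's only compiled once.
var shaderCache = {};

function getOrCreateShader(shaderSource) {
    // Using the source string as the cache key; Wade uses a hash of it
    if (!shaderCache[shaderSource]) {
        shaderCache[shaderSource] = { source: shaderSource, compiled: true };
    }
    return shaderCache[shaderSource];
}

var shaderA = getOrCreateShader('gl_FragColor = vec4(1.0);');
var shaderB = getOrCreateShader('gl_FragColor = vec4(1.0);');
console.log(shaderA === shaderB); // true - only one shader was created
```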

You can also store a shader in a text file, and use Sprite.setPixelShader() to apply it to your sprites.

When I'm using the editor and need to re-use the same shader across different sprites, first I normally create a sprite with my shader (let's call it refSprite). Then, for all the sprites that need the same shader, I call

mySprite.setPixelShader(refSprite.getPixelShader(), refSprite.getPixelShaderUniforms());

If you're not using the editor but just the framework, you could do something similar by storing your shaders in a text file and using wade.loadText, or even better store them in a JSON file where you can store both the shader code and the uniforms with their values, and then use wade.loadJSON.
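For example, such a JSON file might look like the data below (the file name 'windShader.json' and the uniform are made up for illustration - the actual shape is up to you). The mock sprite stands in for a real Wade Sprite so the snippet is self-contained; in a real project you'd get the data from wade.loadJSON and call setPixelShader on a real sprite:

```javascript
// Hypothetical contents of 'windShader.json' - the shape is an assumption
var shaderData = {
    code: 'vec4 color = texture2D(uDiffuseSampler, uvAlphaTime.xy); gl_FragColor = color;',
    uniforms: { windStrength: 'float' }
};

// Minimal stand-in for a real Wade Sprite, just for this sketch
function MockSprite() {
    this.setPixelShader = function(code, uniforms) {
        this._shaderCode = code;
        this._shaderUniforms = uniforms;
    };
}

// Real usage would be roughly:
// wade.loadJSON('windShader.json', function(shaderData) { mySprite.setPixelShader(shaderData.code, shaderData.uniforms); });
var mySprite = new MockSprite();
mySprite.setPixelShader(shaderData.code, shaderData.uniforms);
console.log(mySprite._shaderUniforms.windStrength); // 'float'
```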

Multi-pass shaders are a complex subject in general - I haven't really figured out how we can do that without making it too complex for Wade users to put all the pieces together. Currently you can do multi-pass shaders by using multiple sprites, one for each pass, and drawing to textures. Not ideal I know, but it works:

secondSprite.inputTexture = 'tempTexture';

If you do that, then in the secondSprite's pixel shader you can use a Sampler2D uniform (called 'inputTexture' in this case) that contains the result of the firstSprite's draw. I hope this makes sense.
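The second pass's pixel shader could then sample that uniform. A minimal sketch, assuming a sampler2D uniform named inputTexture has been added to the shader's uniform list as described above (the tint at the end is just a placeholder for whatever this pass actually does):

```glsl
// Second pass: read the result of the first pass from inputTexture
vec2 uv = uvAlphaTime.xy;
vec4 firstPass = texture2D(inputTexture, uv);
// Placeholder effect for this pass - replace with your own
gl_FragColor = firstPass * vec4(1.0, 0.9, 0.8, 1.0);
```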

I realize that this is a bit complicated, and that's precisely why we have a built-in multi-pass shader for blur, which is the most common case.

You can set the blur level for each layer using wade.setBlur (see here for an example). You can also use the blurred texture of each layer (which is only available when blur is set to something > 0), using a Sampler2D uniform whose value is _layerBlur_X (where X is the layer id). Similarly, you can access each layer's render target (i.e. the normal, non-blurred texture) as _layerRenderTarget_X.
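For instance, a post-process shader could mix a layer's sharp and blurred versions. This is a sketch assuming blur is enabled on layer 1 and that sampler2D uniforms named _layerBlur_1 and _layerRenderTarget_1 have been added to the shader's uniform list, per the naming convention above:

```glsl
vec2 uv = uvAlphaTime.xy;
vec4 sharp = texture2D(_layerRenderTarget_1, uv);
vec4 blurred = texture2D(_layerBlur_1, uv);
// Simple depth-of-field style effect: more blur towards the bottom of the screen
gl_FragColor = mix(sharp, blurred, uv.y);
```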

That's pretty advanced stuff, so let me know if you get stuck and need help with it, I'll try to help.

Regarding your second point: I think this must be some configuration problem with your server... it seems it doesn't let you upload (or download) files with a .wsc extension. Are you sure they exist on the server, i.e. that they were uploaded successfully? There should be something you can change in the config to stop it blocking them - for example, some servers (IIS in particular) return 404 for file extensions with no registered MIME type, so adding a MIME mapping for .wsc should fix it. This will depend on what type of server you are using though.


>If you're not using the editor but just the framework,

I'm using the editor currently, because I'm a newbie in JS; I have to work through a lot of JS code examples to understand the architecture and copy the patterns. But I'll migrate to the plain framework soon.

Thanks a lot, I'll try. You've done a great job with the editor!


There is no way to set a shader on a SceneObject. My scenario:

1. Load tree.png

2. Make a multi-sprite SceneObject with 1000 trees (sprites) in random positions via JS code

3. Apply a shader to the SceneObject (i.e. to all 1000 trees - for example day/night colors, dynamic lighting, texture scrolling, etc.)

I want to make something like this http://seganelservice.ru/WebGL/Wade11/ (it's only a test, not the final scene). But I want to do it from JS, with procedural forest generation and fast execution (currently there is a very big 3000*2000 texture, which is too slow for shader scrolling; I know I can resize the texture in Photoshop, but there is also no randomness that way).

Is there another way to realize this scenario (generate a random forest from JS with one tree picture, and apply a shader to the whole forest)?


Here I've generated such a random forest from JS - one SceneObject with many sprites on it. But now I can't scroll it, or apply any shader to the whole SceneObject.

Forest generation on click, but with no scrolling (((




There will be 3-4 such SceneObjects in different layers on the scene, each layer with a different scroll speed (3D emulation as in demos). The question is about performance optimization only - not scrolling 150 sprites, but scrolling one SceneObject texture in one shader.


I can't ask the right question, because the right question contains 90% of the answer... But I want to glue many (100/500/1000) textures programmatically into one/two/three screen-sized textures, for performance, and animate them with one/two/three shaders - not 100/500/1000 shaders or a JS position loop. I'm afraid that many sprite objects in a loop will be slow (I saw this on a phone while opening some of my demos in the browser). I want to understand the Wade pipeline - how sprites are sent to the GPU in the framework, and what the optimal case is... Currently everything is fast, but I'm afraid complex scenes may be slow... What is the fastest way to animate 10000 sprites - is there any doc or forum topic about this?



First of all let's try to understand what you mean by "animate".

When you have many sprites using the same texture and the default shader (it doesn't matter if they belong to the same scene object or not), Wade will draw them in batches. This means that if you have 10000 sprites, Wade will only do about 10 WebGL draw calls. Add to this the fact that sprites outside the screen are not drawn at all, so it's super fast.
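As a rough illustration of the batching math (the batch size of ~1000 sprites per draw call is inferred from the "10000 sprites, about 10 draw calls" figure above, not taken from Wade's source):

```javascript
// Sketch: sprites sharing a texture and the default shader are drawn in
// batches, so draw calls scale with the number of batches, not sprites.
var SPRITES_PER_BATCH = 1000; // assumed from "10000 sprites -> ~10 draw calls"

function countDrawCalls(sprites) {
    // Group sprites by texture; each group is drawn in batches
    var groups = {};
    sprites.forEach(function(sprite) {
        groups[sprite.texture] = (groups[sprite.texture] || 0) + 1;
    });
    var drawCalls = 0;
    for (var texture in groups) {
        drawCalls += Math.ceil(groups[texture] / SPRITES_PER_BATCH);
    }
    return drawCalls;
}

var trees = [];
for (var i = 0; i < 10000; i++) {
    trees.push({ texture: 'tree.png' });
}
console.log(countDrawCalls(trees)); // 10
```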

However if you use your own custom shader, it's one draw call for each sprite so 10000 draw calls - quite a bit slower.

So it's important to understand if and why you need a custom shader. If it's an effect that can be done in post process, then using the default shader for your sprites and a custom post process shader on the layer is almost certainly going to be faster.

If you can show me what effect you're trying to achieve, I may be able to help some more.


Thanks a lot, I understand the idea of post-processing. I tried it and it almost works; here is the post-processing code for the forest layer (an endless texture scroll to imitate scene movement). The trees are wind-animated while scrolling and almost everything is awesome, except for an invisible moon on another layer (but maybe that's my fault, I'll look into it later):

vec2 uv = uvAlphaTime.xy;
vec4 color = texture2D(uDiffuseSampler, uv);
gl_FragColor = color;

But I found the main overhead impacting performance somewhere else.

1. Generate 150 sprites in one SceneObject. One tree.gif is about 40Kb.

2. Set a "wind animation shader" on each sprite:

vec2 uv = uvAlphaTime.xy;

uv.x += (1.0-uvAlphaTime.y)*sin((posx/500.0 + posy/500.0 + sqrt(uvAlphaTime.y) + uvAlphaTime.w/3.0)/0.3)/10.0;

vec4 color = texture2D(uDiffuseSampler, uv);

if (color.b > 0.86) { color.w = 0.0; }
color.w *= uvAlphaTime.z;

3. If I set sprite.alwaysDraw(false); then the animation doesn't work and GPU load is 15% (why? there is no animation, no position changes, a static screen).

If I set sprite.alwaysDraw(true); then the animation works and GPU load is 86%(!), as in AAA+ games.


I'm not very good at WebGL, GLSL and JS, but a while ago I worked on my GPU with 4,500,000(!) particles with shaders, with 200,000 geometries with shaders, and with 10,000 meshes (3D models) with shaders.


But here we have only 150 trees (textures only, which should be the fastest case) and such GPU load... Maybe I'm doing something wrong? Or maybe the overload is caused by CPU-GPU texture transfers on each update? I think the cause could be patched very quickly in the framework with the right techniques, but I don't know them...


I know for certain that it can be made 10000 times faster by working with textured particles in three.js. Maybe the framework needs a patch? Or maybe we need some new class? I have some high-performance implementations in three.js, for example a million particles, and can upload them if you need.


Here I see 50% GPU load at FullHD, but there are fewer than 150 trees: http://seganelservice.ru/WebGL/Wade15/

Maybe the framework could be modified to give the user an option to redefine the way sprites are rendered/blended? I'm not sure, it's just an assumption.

Maybe I need to disable the QuadTree - would everything in my case then become very fast, with lower GPU load?


Like I said above, if you don't use the default shader then your sprites are not batched. It's one draw call per sprite, which is far from ideal.

However, I don't think that's your problem there. The large amount of overdraw is likely to be the cause of the slow-down. That is, it's not the number of objects that makes things slow down, but rather the number of pixels that you are drawing.

GPU load in itself is not a very useful metric. Using a browser extension such as WebGL Inspector will give you more details about what exactly is affecting the performance of your scene. Trying to guess what slows things down is not a good idea - there are tools that will tell you for sure. From a quick look, I think it's most likely overdraw, but I may be wrong.


1. Please look here: http://seganelservice.ru/WebGL/OneMillionSprites/

- you can move the camera and see 1,000,000 moving sprites. Not 150. No quad tree, no visibility checks. Just a simple, straight render with shaders.

- GPU load is 4-5%. Not 86%.

- 8 FPS

2. The same here: http://seganelservice.ru/WebGL/HundredThousandSprites/

- 100,000 moving sprites.

- GPU 20%.

- FPS 60. No stutter.

This is single-page code (index.html) + three.js + some common libraries.

Later I'll make a scene with trees in WebGL + three.js and post a link here to compare performance.


And here we can see 10000 particles with shaders, at 18% GPU load.

Maybe you'll like these methods and implement something similar in the framework.


You can already do that with Wade, there is no need to change anything to achieve that. I am suggesting that the main difference with the forest scene is the number of pixels that you are drawing, not the number of objects.

As stated above we do not currently batch sprites using custom shaders, which we could do and it would improve performance in some cases (but it still won't solve the problem with your trees, as it's probably unrelated). There is room for improvement there, but it's very low priority on the list of things to do.


Gio, ok, one last try... A strict test:

a) 100,000 small trees in Wade. They are animated, but the update (animation) hangs at 0.5 FPS.

b) 100,000 big trees (more pixels) in three.js. The update runs at 40-60 FPS.

No clipping! No culling! All 100,000 are animated with the same custom shader. The same scene/count/texture. And many more pixels. You can move the camera and see it.



You're saying that performance is a low priority because everyone uses the default shader... But shaders are the power in games... And performance affects ALL games of ALL users. It is the core. Please show me any Wade project which displays 100,000 animated sprites with shaders or with JS. You can't.

But it is possible - you can see it in my demos. And it is possible to animate 1,000,000 sprites without stutter. Pixel count doesn't matter: the GPU renders only screen pixels and nothing else in any frame. And the optimal code is very simple - you can see it via "view source" in the browser. It is not clean, but very, very simple. And very fast.

This is the core of the framework, I think, not a low priority. Currently Wade is awesome on the JS side, but very poor in performance. I tried 1500 sprites for a snow simulation, and 10000 hang in Wade. But 10000 snowflakes fly in three.js...

Spend 3-4 hours - and you can make it much faster... 100-500 times faster. And ALL users would be able to use more sprites and more beautiful animation and effects.

You could say there's no need to show 100,000 sprites... But I saw the problem when scrolling ONE big 4000*4000 background sprite with only 150 animated trees... And saw stutter... And saw stutter when scrolling a texture in a post-effect with 150 shader-animated trees...

I can't make a scene with 150 animated trees and scroll the moonlit background - it moves with stutter... And I can't scroll a texture with 150 trees - it stutters. And all other animation is slower in such a scene; the human character walks slower. And this is a problem, please hear me...

But the solution is the same, as I said - it's in the core... Only a few hours to solve. And all users will get the possibility of beautiful shader graphics.


I think you update the mesh/geometry fully, and on each step textures are passed from the CPU to the GPU, which is the bottleneck. Or maybe you fully update something else...

But the technique is to update only the attributes/uniforms of the shaders... And that is all the animation magic.


This is how I update animation in the three.js render callback:

a) uniform "time" in pixel shader

var elapsedMilliseconds = Date.now() - startTime;
var elapsedSeconds = elapsedMilliseconds / 1000.;
uniforms.time.value = elapsedSeconds;

b) attributes in vertex shader (positions/sizes of elephants/sprites):       

var pos = geometry.attributes.position.array;
for (var i = 0; i < particles; i++) {
    pos[i * 3] += units[i].deltax;
    pos[i * 3 + 2] += units[i].deltay;
}
geometry.attributes.position.needsUpdate = true;

Very simple and very fast. And no other updates. The CPU-GPU pipeline is the bottleneck, especially when updating textures...


Thanks for the suggestion.

Just to clarify: textures are not passed from the CPU to the GPU on each step; I'm not sure why you think that is the case.

Please understand that this engine / framework is designed to make it easy to make games and web apps. I don't think that letting users interact with vertex shader attributes or changing the rendering pipeline fits with that philosophy.

There will always be some special cases (like drawing thousands of objects with custom shaders), where it isn't as fast as it could be. Luckily though, it is not an issue for 99.99% of games. Batching sprites using the default shaders makes it very fast in the vast majority of real-world scenarios, and keeps the interface very simple. It is not worth making the interface more complicated for a few special cases. Most users do not want to mess with shader attributes, they just want to make games that work in a short time.

However when you want to make something special and this is an issue you have the source code that you can change to fit your specific needs. If, say, you're not happy with the way Sprite.draw works, there is nothing stopping you from redefining Sprite.draw in a way that works better for your project.
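Redefining a prototype method like that could look like the following sketch. The Sprite class here is a stand-in, since the real signature of Wade's Sprite.draw isn't shown in this thread - adapt the arguments to whatever the actual method takes:

```javascript
// Stand-in for Wade's Sprite; the real draw signature may differ.
function Sprite(name) {
    this.name = name;
}
Sprite.prototype.draw = function(context) {
    context.calls.push('default draw: ' + this.name);
};

// Keep a reference to the original, then redefine draw for your project
var originalDraw = Sprite.prototype.draw;
Sprite.prototype.draw = function(context) {
    context.calls.push('custom logic for ' + this.name);
    originalDraw.call(this, context); // optionally fall back to the default
};

var context = { calls: [] };
new Sprite('tree').draw(context);
console.log(context.calls.length); // 2: custom logic ran, then the default draw
```

Keeping a reference to the original method lets you add behaviour (batching, sorting, instrumentation) without losing the default rendering path.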

Having said that, when I can find a way of batching custom shaders without making the API more complicated to the detriment of the majority of users, I'll do it.
