01 December 2022

November 2022

It was another hackweek at Facepunch this month so we all had a week off from our regular work, but to make up for it we worked extra hard for the rest of the month!
Didn't like how Citizen's irises looked flat, so eye've taken an evening to correct that.

Better materials and parallax

Eyes have correct parallax now: their irises refract further back depending on the angle you're viewing them from. It's a small change that gives much more life to the character. On top of that, I've changed the materials to better conform to the look we want.

Pupil Dilation

Citizen's pupils now have a pupillary response to the surrounding luminosity: their irises dilate in dark environments and contract when you shine a flashlight with the power of the sun at them like a jerk.
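Conceptually it's just a remap from scene luminance to a pupil scale; here's a minimal, purely illustrative sketch of that idea (the actual response lives in the eye material, and none of these names come from the real implementation):

// Illustrative only: map surrounding luminance (0 = pitch black, 1 = sun in your face)
// to a pupil scale. Bright light contracts the pupil, darkness dilates it.
public static float GetPupilScale( float luminance, float dilated = 1.4f, float contracted = 0.6f )
{
    float t = Math.Clamp( luminance, 0f, 1f );
    return dilated + ( contracted - dilated ) * t;
}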
I moved the publish window into the Project settings panel and made it a multi-step wizard. Way better, way more coherent, and it's going to let us put extra steps in for specific assets in the future (like being able to choose thumbnails and loose resource files).


For this hack week, I experimented with something I had suspected might be possible entirely in Animgraph, given how the character has been set up thus far: changing the height of the character. (And for bonus points, you can combine this with scaling the entity itself!)

We tend to think of animation as a series of changes from a strictly-defined base skeleton, but really, there is nothing that says you can't change relative positions to your heart's content. In a sense, the body parts aren't getting scaled up/down; they're getting longer or shorter.


So there's a new "scale_height" float parameter, which is, by default, clamped to values between 0.8 and 1.2. The theoretical bounds are 0.5-2.0, and things might start looking a bit too odd beyond the range which I've decided should be "officially" "supported" — but there's nothing that prevents you from creating your own copy of the animgraph and setting the default value to 2.0 if you wanted to create a "Slenderman"-style character, for example.
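If you want to drive it from code, it behaves like any other animgraph float parameter; a minimal sketch, assuming the standard SetAnimParameter call on an AnimatedEntity:

// Sketch: drive the new parameter from game code (assumes the stock Citizen animgraph).
public static void MakeTall( AnimatedEntity citizen )
{
    // Clamped by the graph to 0.8 - 1.2 by default.
    citizen.SetAnimParameter( "scale_height", 1.2f );

    // Bonus points: combine it with scaling the entity itself.
    citizen.Scale = 1.1f;
}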

The input delta sequences are inherently changing the height of the pelvis; that effect is countered when ducking, and nullified while sitting.

There are also a few new procedural constraints that take advantage of the twist helpers on "lower limbs" (forearms, calves), keeping their position further down the limb, helping the model retain a better shape. This is unfortunately not possible right now on "upper limbs" (biceps, thighs), barring a minor rig rework and (most likely) a lot of tedious work going around all existing clothing assets.

The upper body compositing for "holdtypes" has only been using model-space rotation + parent-space blending modes, so things worked out reasonably well right away; however, an arm might overextend here and there if it's using IK while the character is shorter.
As for the feet, because they use IK by default, there's no problem with ground contact... but to keep the original animation look (kind of) intact, and reduce the likelihood of overextended IK pops, I put a system in place that effectively scales the position of the IK targets from center. This shortens or lengthens each "stride" and therefore induces foot skating, so the move_* values fed to the graph need to be counter-scaled accordingly... maybe I'll try to figure that out for the next hack week!
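Scaling the IK targets "from center" is the usual scale-about-a-point operation; a minimal sketch of the math (illustrative, not the actual graph nodes):

// Illustrative: scale a foot IK target's position about a centre point,
// which shortens or lengthens each stride as described above.
public static Vector3 ScaleFromCenter( Vector3 targetPosition, Vector3 center, float scale )
{
    return center + ( targetPosition - center ) * scale;
}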

We had a contact shadows technique from Alyx that was modified to also work as general ambient occlusion for s&box, but it had some problems in edge cases.

I wanted to solve that and try to make it better than any other technique, making it more of a shadowing technique rather than an approximation of distance around a radius. I always see other games using screen-space techniques for AO, and the more I see them, the more convinced I am that just blurring the buffer is the wrong approach.

Our Ambient Occlusion technique was remade and now more closely approximates how ambient shadows behave in the real world, taking luminance directionality into account. It's still as cheap as the previous technique and miles faster than HBAO.
Ambient shadows from a car will occlude its underside accurately regardless of the angle. There's also the plus that it gives behaviour similar to bent normals from baked AO, but dynamically.

New Shapes

More primitive types are also included: Capsule replaces the Spheroid shape that was used previously, and Sphere and Cylinder shapes were added too. These also affect reflections.
While Citizens have the vision of an apex predator, unfortunately our monkey eyes can't match them.

Foveated rendering is an optimization technique to adapt render quality to human peripheral vision, but it's usually very stark and noticeable since it traditionally goes in power of two increments with an obvious pattern.

I've wanted to implement it using a different take that more closely matches the granularity of our optics, making it unnoticeable while keeping its advantages.
You'll get better performance in VR essentially for free. I want to give VR another performance pass soon, but let's focus on flat screens first for the next month.

While working on redoing AOProxies I've taken the chance to also refactor our high-quality reflection technique.
Now it's more accurate, faster and can be adapted into standard shaders more easily, on top of supporting self-reflections. It's now toggleable on the standard Complex shader.
I've updated the package selectors in tools, so you can multi-select and easily remove selected packages. This lets you set a group of addons in the Launcher.
Properties in the Launcher will also retain selected packages when re-opening the package selector.

This month Rohan and I have been working on a new API for hack week that lets you create party games that interface with a website, opening up the possibility of games that support interaction from phones by submitting multiple-choice answers, custom text answers, drawings, or anything else.

Game Interface

The in-game interface for Juicebox. Currently showing a bunch of random questions with answers.

Mobile & Web Interface

Connect to the server via your smart device or desktop, allowing you to input answers to questions as well as vote for your favourite answers.

Conceptual in-game Interface

A version of the game which would allow you to play against fellow s&box players in-game and against people on device.

Demo

For hackweek we got a basic gamemode working in S&box using the backend and website we created. It'll start up a session, show the code and wait for players to join, and let the host (first person who joined) start the game when ready. It'll then go through rounds of asking everyone to write up a response to a prompt, and then have everyone vote on which answer was best.


API Usage

The website is built to be a generic dumb client. The server basically sends messages describing what it should display and prompt the player for, and the website sends events back to the game to be handled. This should allow it to support all kinds of games by prompting for different things with various form controls.
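To make that concrete, the exchange boils down to "game describes a prompt, site sends an event back". A purely hypothetical sketch of the kinds of shapes involved - none of these type or property names are the real Juicebox API:

// Hypothetical shapes only - the real API will differ.
public class DisplayPrompt
{
    public string Kind { get; set; } = "multiple-choice"; // or "text", "drawing", ...
    public string Question { get; set; }
    public string[] Choices { get; set; }
}

public class PlayerEvent
{
    public string PlayerId { get; set; }
    public string Answer { get; set; }
}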

Future Work

This was a fun hack week project and we're looking forward to polishing this up and releasing it so that we can get some party games on S&box. It was quite literally hacked together on my part to get an MVP working within the hack week duration so stay tuned for more.
After looking at a few of the new features that come with .NET 7.0, I decided to upgrade us. This includes C# 11 as well, so it comes with the latest & greatest features.

Read more about it on Microsoft's article here.
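For a flavour of what that unlocks, here are two small C# 11 features that now compile in your code - raw string literals and list patterns:

// Two of the new C# 11 features available after the upgrade.
public static void CSharp11Examples()
{
    // Raw string literals - no escaping of quotes needed.
    string json = """
        { "name": "citizen", "height": 1.2 }
        """;

    // List patterns.
    int[] scores = { 1, 2, 3 };
    bool startsWithOne = scores is [1, ..];
}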
More Clothing and Hair! We've also gotten a lot more reskins of assets.




Plus a bunch of reskins of many of the clothing assets - 

So we're now able to pick from a selection of different colours, creating more outfit combos. We've also updated the avatar customization screen to break clothing up into subcategories and colour variations:


So this month we have...
  • Smart Jacket
  • Puffer Jacket
  • Skater Helmet
  • Flower Hairstyle 
  • Bunch of Reskins for most of the clothing
  • Organisation of the Clothing (Parents and Reskins)
  • Makeup  (Lipstick & Eyeshadow)
  • Shorts
  • Alternate Colours for the Hair
A slightly slower month, since we also had hackweek, during which I worked on a WW2 Military Character that I'm hoping to release soon for people to check out and use in their gamemodes.
For a while now we've had `Material.FromShader`; however, there wasn't a lot we could do with the actual parameters of the material. Now we have the ability to customize those parameters and create textures on the fly in-game!

public static void SetupCustomMaterial( ModelEntity myModel )
{
    // Create a new material from the "simple" shader
    Material m = Material.Create( "my_cool_material", "simple" );

    // Set shader parameters directly
    m.Set( "g_vColorTint", Color.Red );
    m.Set( "g_flMetalness", 1.0f );

    // Textures can be set too - built-in ones or ones created at runtime
    m.Set( "Color", Texture.White );
    m.Set( "Roughness", Texture.White );
    m.Set( "Normal", Texture.Transparent );

    // Apply it to the model
    myModel.SetMaterialOverride( m );
}
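The "textures on the fly" part works the same way - you build a texture in code and pass it to Set. A rough sketch, with the caveat that the exact builder calls here (Texture.Create / WithData / Finish) are an assumption about the API shape, so check them against the docs:

// Assumed API shape - the builder methods are from memory, verify before use.
public static Texture MakeSolidColorTexture( Color32 color )
{
    var data = new byte[] { color.r, color.g, color.b, color.a };

    return Texture.Create( 1, 1 )   // 1x1 RGBA texture
        .WithData( data )
        .Finish();
}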
There was no feedback from CodeGen anywhere before: you could be doing things completely wrong and it wouldn't tell you, leaving you puzzled as to why your code wasn't working.

I've made it report diagnostics to the error list now and fail the build on errors.
I've focused a lot this month on resolving crashes and overall stability. We all do this every month anyway, but we know the crashes have been frustrating, so it's nice to share that we're seeing more and more crash-free sessions.
I rewrote a big chunk of the backend to make it easier to tag and sort a list of packages. I should have done this a year ago; it's so much better now.
As you can see we have categories and tags on gamemodes now. The decision here was to reduce the number of categories and promote the most popular tags into new facets/categories.
Here you can see that maps have a bunch of different facets to filter them down to something you're interested in. They have tags too!

What's kind of interesting to me is that we can process assets on the backend to facet them automatically. For example, take a look at this size category in the asset browser:
Because our backend is in C#, it's easy for us to plug in and use the awesome Source 2 Resource Viewer library to process the assets into buckets based on what they are.
I'm sure no-one cares about this but me, but Qt's default docking is fucking dog shit. It actually embarrasses me every time I see it. I fixed what I could in engine, but it was still mostly shit.

Well now that's fixed! You have no idea how happy this makes me.

Steam Audio is now running on the latest version (v4.1.2), which fixes many of those pesky crashing bugs. We've also re-enabled Embree, so reverb & pathing should be running faster again.
I spent a good few days fixing the main grievances people had with VR, which were making their games pretty unplayable.

Debug stereo rendering


I added a new debug option to the editor that enables stereo rendering. This gives you a general idea of whether your content renders correctly for two eyes like it would in VR. It's not 1:1 the same as VR, but it's pretty close.

VR Rendering fixes


Lots of rendering was only happening on the left eye, or flickering, or not rendering at all.
  • Fixed DebugOverlay only appearing in left eye and not working with depth
  • Fixed projected decal and cable shaders not instancing properly causing them to flicker or not appear at all
  • Fixed RenderHooks not working in VR properly, these are used for post processing effects like glow
There was also a bug with distant objects disappearing way too early. You could see this clearly in Table Tennis: the ball only needed to pass the other player to start flickering and disappearing.

I fixed it by adjusting the unique culling frustum VR uses - it was rejecting scene objects under the assumption they were covering less than 0.25% of the screen.
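For context, that kind of rejection is typically just a projected-size estimate compared against a threshold; a rough, illustrative version of the check (not the engine's actual code):

// Illustrative screen-coverage check, not the engine implementation.
// Rejects an object whose bounding sphere covers less than `threshold`
// of the vertical field of view at the given distance.
public static bool IsTooSmallToDraw( float radius, float distance, float fovDegrees, float threshold = 0.0025f )
{
    float halfHeight = distance * MathF.Tan( fovDegrees * 0.5f * (MathF.PI / 180f) );
    float coverage = radius / MathF.Max( halfHeight, 0.0001f );
    return coverage < threshold;
}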

VR Input fixes


I've made the hand transforms and linear velocity get affected by VR.Scale, previously your camera would get scaled but your hands would remain where they were.

I also fixed the hand's angular velocity being wrongly affected by VR.Anchor.
I shaved between 0.3 and 5ms off frame time by optimizing how the UI is laid out. There are still more optimizations to make here, but that was really hurting us in some scenarios.
We started collecting anonymous performance data. I love stats and data. It takes the guessing out of everything. So these allow us to see things like startup times.
The stats are pretty noisy, but they give us some good information. Here you can see the 0th, 25th, 50th, 75th and 99th percentiles. I'm flying blind on this shit, but this seemed to be a better representation than the average.

From this we can see that some poor bastard is waiting 2 minutes for the game to startup, while some other lucky bastard is waiting 4 seconds. That's something we should try to address.

Then we have stats for compiling. How long is it taking people to compile at runtime?
Again, not too bad. Averaging around a second or two, but the worst case can be over 30 seconds. How many files are people compiling per build? Maybe that's why it's taking so long...
I suspect some people are dumping entire libraries from GitHub into their repository, which would account for the 1000+ .cs files. What's the split - what's the slow part of the compile?
The actual compile (emit) is the slowest by far, but our generators aren't far behind. Maybe we can optimize them. Actually reading the .cs files from disk is taking a decent chunk too. We could take a look at that, although I suspect 500ms is probably a good time for reading 1000+ .cs files from a presumably slow disk.

While these metrics are almost useless in isolation, they become important when you consider that we're crawling towards a release and should be making everything better. We should be able to review these charts every month and be able to see that we're improving, or at a minimum standing still.
Back again with another 2D game in the 3D engine. This one's a co-op roguelike using only the Razor UI system and emoji.

I've set the default asset browser in Hammer to the new one. You should be able to do everything you could in the legacy one, and way more with cloud assets.
There are two new filters, @selected and @inmap, which do what you'd expect.
During hack week, I wanted to try creating C# shaders. The general idea is that we take regular C# code, transpile it to our shader format and compile that. It was a cool experiment, but a lot of problems came up with it.

The Good

The main reason I wanted to look into C# shaders is the IntelliSense support we'd get out of it. Since everything we transpiled had an equivalent C# type, we would get IntelliSense for free. This meant less time digging through documentation and more time writing code that works:
The second great thing about it is that people don't need to worry about learning a different language - all your C# knowledge transfers*. We'll get back to this later. A lot of people seem to struggle with the idea of HLSL and shaders as a whole; the hope was that this could be an entry point that makes the whole process much easier to understand.

The Bad
C# has a lot of fancy features, most of which are not supported by HLSL. This requires us to do some extra heavy lifting to get things working. An example of this is struct initialization. With HLSL you can't easily define a set of default values on a struct: when you create a struct, you need to initialize all of its values. In C#, we can do selective initialization. For example, with a class:
public class Hello
{
    public int FieldA { get; set; } = 2;
    public int FieldB { get; set; } = 1;
    public int FieldC { get; set; }
}

public class World
{
    public Hello Create()
    {
        return new Hello() { FieldB = 0, };
    }
}
In HLSL, every field needs to be initialized explicitly, which means we need to recursively handle default values for all types and create the object in place, as well as keep track of the actual underlying object variable name. This would end up being compiled as:
struct Hello
{
    int FieldA;
    int FieldB;
    int FieldC;
};

Hello Create()
{
    Hello temp_var_0;
    temp_var_0.FieldA = 2;
    temp_var_0.FieldB = 0;
    temp_var_0.FieldC = 0;
    return temp_var_0;
}
Since we can't make things 1:1, as in the instance above, we also need to introduce variable state tracking & variable management, and translate everything into HLSL-compatible types: we need to know which variable has which type & which components can be accessed. A simple edge case, for example, is dealing with Vector4s. We can't directly convert a Vector4 being initialized with a Vector3 - that isn't valid HLSL - so we need to handle cases like this and pass our own extra arguments.
new Vector4( new Vector3( 1, 2, 3 ) ); // Becomes float4(float3(1,2,3));
new Vector4( new Vector3( 1, 2, 3 ), 0 ); // Becomes float4(float3(1, 2, 3), 0);
Finally, as we're transpiling, a lot of the C# code can be pretty verbose. Our shaders are already verbose, so we would need to do a lot of simplification for this to be viable. Currently, it looks very close to the HLSL side of things; what we can simplify is still TBD.
The Ugly
The generated code could look better - it's quite annoying to read. Shader inputs are also set up to be properties within the class, which introduces the issue of shuffling Vertex & Pixel inputs around between methods. The solution? Always pass the inputs/parameters to any method call. This could be further optimized by processing the function and only passing what's used, but that would take a lot longer than a week. Luckily the HLSL compiler optimizes it out and flattens the code either way.

The Result
Now, with all the boring stuff out of the way, here's an example of our post-processing pixelation shader recreated in C#, as well as the actual generated output.

C#
VFX/HLSL
Maybe next hack week I can look at simplifying it & rewriting a large chunk of it. Perhaps I can fix some of the issues I have, or perhaps I'll leave it at this. It was a fun experiment and I learned a lot through the process!
Our particle system has an asset type called snapshots, which let you define points and values that the particle system can read and use. These can now be created at runtime through code, so you're no longer limited to creating snapshots in a tool.


Last month we added addons, which you could choose before creating a server and which would load with the server. This month we added runtime addons, which can be downloaded and spawned while the server is running. Code, models, materials, sounds, particles and so on are all downloaded and loaded at runtime.

We had something like this in GMod a while back; it was called CloudBox. That was shortly before Workshop was invented, I think, and when we switched to that we lost this awesome functionality. One of my favourite things ever in gaming was jumping in game, spawning things that people had made and trying to work out what they did.

There's a bit of work to do around this feature, like deciding which addons people are allowed to spawn on your server. Obviously not just any addon - or they'd make one that gives them admin and spawn that. So it's probably going to be something like a whitelist of "safe" addons from us, while server admins can spawn whatever they want. We'll figure that shit out.
Hah fun month. This month it's Christmas so everything is going to go to shit. We decided to lean into that with a couple of weeks of pain.

We've spent a while bloating the game and trying not to break anything. Over the next couple of weeks we're going to be ripping shit out and changing shit up. We're going to break everyone's addons. That's going to suck but I think we've been pussies about it up until now, it's time for some pain.

We've already got some pain queued up. To make the pain a bit more tolerable we have a rule: if someone breaks the API they have to fully document it on the wiki and fix all of our gamemodes/addons. That should give you all a fighting chance!

I don't know if there'll be a blog post over Christmas - so if not thanks for reading our boring devblogs and see you next year 💗💗