01 September 2021

August 2021

Lots of VR stuff this month, but as usual we did lots of other stuff too.
We added a tag system to entities. This allows you to tag each entity with an unlimited number of networked strings. Like classes in html/css, this is incredibly simple and incredibly powerful.
Here's the simplest example, tagging a player with the "player" tag. Now every system in the game will be able to test for that flag and know it's a player.
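
Here's a rough sketch of what that looks like in code, assuming the Tags accessor on Entity works the way it's described here:

```csharp
public override void Respawn()
{
	base.Respawn();

	// Mark this pawn as a player - any system can now check the tag
	// without caring about the concrete class.
	Tags.Add( "player" );
}
```

Then anywhere else you can do `entity.Tags.Has( "player" )` to decide whether to treat it as a player.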

Generic checks like this are useful in s&box because our Pawn might just be a bare entity - it doesn't necessarily have a common Player base class we can test for.
But with this system it's easy to determine if that entity should be treated as a player, or an npc, or a grenade etc without checking for classes directly.

Permissions


The nice thing about this tag system is that it can be used for any purpose. I could imagine that we support things like "is-admin" and "can-noclip" as a simple permission system.
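
As a hypothetical example (the tag name and helper are made up, just to show the idea):

```csharp
// Hypothetical permission check - "can-noclip" is only an example tag.
public static bool CanNoclip( Entity pawn )
{
	return pawn.Tags.Has( "can-noclip" );
}
```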

Collision Groups


We haven't really gotten anywhere with making collision groups useful and not suck yet, so this is a foot in the door to that problem. Maybe we could collide based on tags?
An example of this is above. You can filter your traces by tag, which to me seems a lot simpler to understand than fighting with collision groups.
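
Roughly what that looks like - the exact filter names (WithTag, WithoutTags) are my assumption of how the trace API exposes it:

```csharp
// Trace forward from this entity, only hitting things tagged "player"
// and skipping anything tagged "debris". Filter names are assumptions.
var start = Position;
var end = start + Rotation.Forward * 4096f;

var tr = Trace.Ray( start, end )
	.WithTag( "player" )
	.WithoutTags( "debris" )
	.Ignore( this )
	.Run();

if ( tr.Entity.IsValid() )
{
	// we hit a player
}
```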

Future


I think of things like setting up a trigger in Hammer, which has options for whether it should trigger on NPCs, players etc. We can change that now so you can just put in tags to require and tags to ignore. That'll simplify it while also making it more configurable.

This is a feature I'm hoping to find more uses for in the future.

I've got a wiki bot that looks at the s&box dlls and adds documentation pages for all of the public classes etc.

This bot was pretty basic and we only had a few pages filled out, so I went in this month and made some improvements and got a bunch more done.

The funny thing about documenting an API is that you end up seeing a bunch of bullshit in the design that you wouldn't otherwise see. I think this is a really good reason why the developers themselves should be documenting stuff like this, because they're more likely to fix the code rather than document it anyway.

This stuff still isn't perfect, like it's missing information indicating that a class has a base class, which I think would be useful. Attributes create a bunch of junk. And really I think the design could be more intuitive and useful.

The citizens are misunderstood. Everyone has only seen them in their primitive naked form for so long now that they can't imagine them any other way.

This month I took some steps to rectify that, by working on hair and some newer clothes for them to wear.

Feedback has been amazing but I'm hoping to improve on it (and get some better facial hair) over the next few months.

The previous shader we had for blending only supported 2-way blends and wasn't powerful at all, which made it hard for us and the community to realize the vision we wanted for our worlds. This month I wrote a new one, which is also being used as a base for the future user-created shader workflow.

Blendable should support up to 5-way blends (limited to 4 at the moment due to GPU register pressure), so you should have no shortage of possibilities for what you can create.

This has the same workflow as standard PBR shaders, and its shading is consistent with them too, so people should be able to just pick it up and use it.

Additionally, beyond just blending between textures, you can use Blendable to tint your geometry in any way you want, for free.

Imagine you're shooting a gun in game. The client runs the prediction shoot code, which calls a "ShootEffects" function, which plays the sound and draws the muzzle flash. Then the server runs the actual code which calls 'ShootEffects' too, which is an RPC, so it sends a network message to the client to call the function.

We need the server to call that RPC on all of the clients so they hear the sound and see the effects. But the predicting client has already run it locally, so it effectively gets it twice. So they play the sound twice.
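
In code the pattern looks roughly like this (loosely based on the sandbox gamemode's base weapon - details may differ):

```csharp
public override void AttackPrimary()
{
	// Runs on the predicting client first, then again on the server.
	ShootEffects();

	// ... the actual bullet traces and damage happen here ...
}

[ClientRpc]
protected virtual void ShootEffects()
{
	// Muzzle flash and sound. When the server calls this it becomes a
	// network message to every client - including the one that already
	// predicted it locally.
	Particles.Create( "particles/pistol_muzzleflash.vpcf", EffectEntity, "muzzle" );
	PlaySound( "rust_pistol.shoot" );
}
```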

The Old Way


So I had a system in place where the predicting client would assume it had already run any RPCs sent during its command. So when one came in it'd ignore it and not run it.

That worked, but sometimes the server sent an RPC that it didn't want the client to suppress. Like if you shoot someone and they die - they only die on the server, and it happens during the command, so the predicting client would ignore any RPCs sent.

So I added Prediction.Off(), which would turn suppression off in its scope. Except people were obviously confused by it, because they put it everywhere - even in client-only code.
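
For reference, the old escape hatch looked something like this (ClientDeathEffects here is just a made-up RPC for illustration):

```csharp
public override void OnKilled()
{
	// Server-side death logic, running during a predicted command.
	// Without Prediction.Off() the predicting client would suppress
	// the RPC, because it arrived during their command.
	using ( Prediction.Off() )
	{
		ClientDeathEffects();
	}

	base.OnKilled();
}
```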

The New Way


If we can make things easier and simpler we should. This was turning out to be a big enough confusion that it was worth fixing.

So now, when the client calls an RPC function client-side, they'll make a note of it.

When they get an RPC from the server they'll suppress it if they already ran it during the prediction.

This is simpler and automatic. It also has the advantage that if the parameters of the call aren't the same then the function is run twice - alerting the developer that their prediction code is wrong.
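
Conceptually the new rule is something like this - not the engine code, just a sketch of the idea:

```csharp
// Sketch only - the real bookkeeping lives inside the engine's RPC layer.
class PredictionSuppression
{
	readonly HashSet<int> predictedCalls = new();

	public void OnPredictedRpcCall( string method, byte[] args )
	{
		// While predicting, remember exactly what was called and with what.
		predictedCalls.Add( HashCode.Combine( method, Convert.ToBase64String( args ) ) );
	}

	public bool ShouldSuppress( string method, byte[] args )
	{
		// Suppress only if this exact call already ran during prediction.
		// Different parameters mean it runs twice - a hint to the developer
		// that their prediction code is wrong.
		return predictedCalls.Remove( HashCode.Combine( method, Convert.ToBase64String( args ) ) );
	}
}
```
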
I added support for hand tracking. 

You get the transform of each hand in an easy-to-access structure. You basically have a position and rotation for each hand. What could be simpler than that?
These are part of the input, so they're available both clientside and serverside in Simulate().
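
Reading them looks something like this (assuming Input.VR exposes the hands the way they're described here; HeldItem is just an example entity):

```csharp
public override void Simulate( Client cl )
{
	// Works on both client and server, because the hands come in as input.
	var rightHand = Input.VR.RightHand.Transform;

	// Example: stick a held entity to the right hand.
	HeldItem.Position = rightHand.Position;
	HeldItem.Rotation = rightHand.Rotation;
}
```
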
You can also access the inputs on the controllers. In an attempt to simplify this stuff I'm keeping it quite generic, based on the controllers that are mainstream right now - by which I mainly mean the Quest and Index controllers.

We might run into a situation down the line where controllers have different inputs, or they get rid of controllers altogether and have just hand tracking. I'm betting against things changing significantly from an input point of view though.
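
The controller inputs live in the same place - a rough example, with the member names being my best guess at the generic layout described above:

```csharp
// Trigger, face buttons and thumbstick, kept generic across
// Quest / Index style controllers. Member names are assumptions.
if ( Input.VR.RightHand.Trigger.Value > 0.5f )
{
	// fire
}

if ( Input.VR.LeftHand.ButtonA.WasPressed )
{
	// jump, open a menu, whatever
}

var move = Input.VR.LeftHand.Joystick.Value; // thumbstick as a Vector2
```
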
You can get the finger curl and splay via Input now, which allows you to pose the fingers in game to match your real-life fingers.

This allows you to give hand gestures such as waving and thumbs up. Any rude hand gestures are detected via machine learning, causing the game to automatically close and an email to be sent to your parents.
These are part of the input, so they're available both clientside and serverside in Simulate().
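
Something along these lines, assuming the curl comes back as a 0-1 float per finger (the method name is a guess):

```csharp
// 0 = finger fully extended, 1 = fully curled (assumed range).
var thumbCurl = Input.VR.RightHand.GetFingerCurl( 0 );
var indexCurl = Input.VR.RightHand.GetFingerCurl( 1 );

// A crude thumbs-up check: thumb out, index curled.
bool thumbsUp = thumbCurl < 0.2f && indexCurl > 0.8f;
```
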
I added extra tracker support for VR. This allows you to get the position and rotation of any extra puck trackers you might have, and also your base stations if you're not using a Quest.
This API isn't finalized. I imagine we need a way to assign trackers to certain roles, so you can tell it which one is on your foot etc. I don't know if there's going to be a lot of call for this stuff, but it's there.

These are part of the input, so they're available both clientside and serverside in Simulate().
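
Access looks something like this - TrackedObjects is my assumption for the name:

```csharp
// Enumerate any extra pucks / base stations and visualise their transforms.
foreach ( var tracker in Input.VR.TrackedObjects )
{
	DebugOverlay.Axis( tracker.Transform.Position, tracker.Transform.Rotation );
}
```
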
As mentioned in previous blogs, we have a unique opportunity right now to just get rid of the legacy baggage the Source Engine comes with - namely all the base entities the engine has been stuck with since the Gold Source days. They contain a lot of features that are simply obsolete in newer versions of the engine. So this is what I have been working on lately.

It starts with the four ways you could create doors in Source - func_door and func_door_rotating for 'brush' based doors you make entirely in Hammer, and prop_door_rotating and prop_dynamic for model based doors.

In s&box we just have one door entity - ent_door. It can handle both dedicated models and 'brushes', and it can work as both a rotating door and a linearly moving one. It can be animated via an Animation Graph if rotating and linear movement aren't enough.

The same philosophy has been applied to a few other entities so far:
  • func_button and func_rot_button were merged into ent_button
  • func_rotating and func_movelinear were merged into ent_platform
  • func_tracktrain, func_train, path_track and related entities used in the creation of elevator and train-like objects are superseded by ent_path_platform
If you have ideas or criticisms of this approach to base entities, please do not hesitate to let us know. This is the perfect time to make them known while things are still in flux.

I have continued to work on environment props to flesh out the streets of construct. This month has included some icons of the British high street - the red post box and the black and gold bin - along with various items of rubbish that can be thrown out when the bins are knocked over.

Being able to directly play animation sequences on models was disabled, but now it works again! This means that you don't need to go through an Animgraph to animate a model. You can do it old-school style.
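
A sketch of the idea - the member names here are assumptions, so check the wiki page and the RTS repository mentioned below for the real API:

```csharp
// Play a sequence directly on an animated entity, no Animgraph involved.
// UseAnimGraph / PlaySequence are assumed names, for illustration only.
var unit = new AnimEntity();
unit.SetModel( "models/citizen/citizen.vmdl" );
unit.UseAnimGraph = false;
unit.PlaySequence( "walk" );
```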

The RTS experiment that Conna is working on has switched back to this method due to a performance bottleneck in Animgraph that was revealed through the vast number of units being used.

You can still do some animation compositing right in ModelDoc, although it's a bit more limited. Here's a wiki page about it!

You can also look at the RTS repository to see how the code works.

With the major overhaul of Hammer in Source 2, it is also quite simple to create custom helpers/visualizations for entities.

For example, here we have a helper showing the opened position of the door. This is something I know a lot of mappers have struggled with in the past, so it should cut out a lot of iteration.
Our intention is to eventually allow coders to add their own Helpers and Tools to Hammer - but that's a while away.

I'm exploring uses for the new Path Tool in Hammer. It's what the new ent_path_platform entity uses. I feel like it's underused and somewhat underdeveloped in base Source 2, but with a bit of work we can turn that around.

With the Path Tool there is no need to set unique entity names for each node to link them together, so creating and modifying a path is a lot simpler.

The path is then compiled into a single entity that you can name and assign to a different entity like the new ent_path_platform.

The path tool data is already accessible to coders via the BasePath class.
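
For example, reading the nodes back might look something like this (the PathNodes and WorldPosition members are assumptions about BasePath's shape):

```csharp
// Find a compiled path by name and draw its nodes.
var path = Entity.FindByName( "elevator_path" ) as BasePath;

foreach ( var node in path.PathNodes )
{
	DebugOverlay.Sphere( node.WorldPosition, 4f, Color.Yellow );
}
```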

In the near future I want each node to have custom key-values and outputs, just like we can already do with entities themselves, for even more customizability.

We had fake world UI previously. It would always be drawn on top of the world, so was unsuitable for some situations.

With world UI the panels are actually drawn in the world, in the right transparent draw order. They draw behind things and in front of things like you'd expect. Great for name tags etc.
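
Creating one is much like a regular panel - a rough clientside example, assuming the WorldPanel base class:

```csharp
// A name tag that lives in the world above a player's head.
public class NameTag : WorldPanel
{
	public NameTag( string name )
	{
		Add.Label( name, "name" );
	}
}

// Somewhere clientside:
var tag = new NameTag( "TestPlayer" );
tag.Position = player.Position + Vector3.Up * 80f; // roughly head height
```
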
To serve us properly, the world panels need to be interactive with the mouse in the same way as regular UI.

We need to be able to click and drag whether we're using a mouse, mouselook or VR.

I started exploring that this month. It meant a bit of a reshuffle of the internals to get it working, but it's in a place now where it's usable and working.

I'm going to try something different for next month's blog. I'm going to write it before we do any of the work. Then we're going to try to do everything I wrote in the blog. Pretty much the Amazon working backwards thing. It should be a fun experiment, we'll see how it goes.

Now we've got a lot of the bigger problems solved, I'm spending time thinking about the learning curve and how we can better explain how the engine works. Part of this process is writing a how-to document and seeing how many steps we can remove from it.

Big thanks to the guys in #vr that helped me test and design the VR API this month, especially ShadowBrain who went above and beyond for me. Really, the whole community is being great to us right now, reporting bugs and requesting features - so thanks to everyone for getting involved! 😻😻🥰