I had an idea about detecting object collisions in 3D worlds while walking my daughter in her pram. I'm certain people already use this idea in the real world, but I can't be bothered researching it to find out who does. The idea stems from experience I've had with dodgy little game apps (like an assessment piece I did in second year of university) where you need to detect object collisions (for example, to make pool balls bounce), and the simplest way to do it is by testing the relative locations of all the objects at a discrete point in time (like each frame).
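That per-frame test is about as simple as collision detection gets. A minimal sketch in Python, with made-up balls for illustration: each frame, check every pair of circles for overlap by comparing the distance between centres to the sum of the radii.

```python
import math

def circles_overlap(p1, r1, p2, r2):
    """True if two circles (centre, radius) overlap at this instant."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.hypot(dx, dy) < r1 + r2

# Each frame, test every pair of balls: (centre, radius) tuples.
balls = [((0.0, 0.0), 1.0), ((1.5, 0.0), 1.0), ((5.0, 0.0), 1.0)]
hits = [(i, j)
        for i in range(len(balls))
        for j in range(i + 1, len(balls))
        if circles_overlap(balls[i][0], balls[i][1], balls[j][0], balls[j][1])]
# hits → [(0, 1)]: balls 0 and 1 overlap (centres 1.5 apart, radii sum to 2)
```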

This works well for the most part; you quickly work out how to find two overlapping objects and apply forces to each of them in exactly opposite directions. Then you discover the "sticky objects" phenomenon, where two objects overlap each other too much, and when you test their locations again next frame they're still overlapping. Then they either get stuck together forever, or spontaneously shoot away from each other at an appreciable fraction of the speed of light.

So then you work out that when you find two overlapping objects, the first thing you do, before applying the force or moving on to the next frame, is move them apart so they are no longer overlapping. And *then* comes the hyperspace phenomenon: when objects move really fast they go right through each other. At that point, discrete location testing fails.
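The "move them apart first" step can be sketched like this: push each circle half the penetration depth along the line joining their centres, so they end up just touching before any forces are applied. (The function name is my own.)

```python
import math

def separate(p1, r1, p2, r2):
    """Push two overlapping circles apart along the line joining their
    centres until they just touch; each moves half the penetration depth."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    depth = (r1 + r2) - dist
    if depth <= 0 or dist == 0:
        return p1, p2  # not overlapping, or perfectly coincident
    nx, ny = dx / dist, dy / dist  # unit vector from circle 1 to circle 2
    half = depth / 2
    return ((p1[0] - nx * half, p1[1] - ny * half),
            (p2[0] + nx * half, p2[1] + ny * half))

a, b = separate((0.0, 0.0), 1.0, (1.0, 0.0), 1.0)
# centres end up exactly 2.0 apart: a → (-0.5, 0.0), b → (1.5, 0.0)
```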

The solution, obviously, is using continuous (not discrete) location testing. It's so obvious. It's surprising anyone would even consider doing it any other way. Right? The only problem with using continuous location testing is this: how? I have a theoretical solution which I've not turned into code, and can't be bothered turning into code, but if I did I'm sure it would work fine.

First: instead of picking a point in time and saying "the object is *here*," you need to pick two points in time and say "the object moves from *here* to *there*." If you're willing to assume that an object moves in straight lines (at least over short periods), you can easily model the volume of space occupied by the object over the period by drawing lines from *here* to *there*. The space inside the lines contained the object at some point between *then* and *now*.
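For a moving circle, that swept volume is a capsule: the segment from *here* to *there*, thickened by the radius. A minimal sketch of one way to represent it (the class and names are my own):

```python
from dataclasses import dataclass

@dataclass
class Sweep:
    """The space a circle occupies while moving in a straight line from
    `here` to `there` over one frame: a capsule (a segment thickened by
    the circle's radius)."""
    here: tuple
    there: tuple
    radius: float

    def position(self, t):
        """Interpolated centre at time t in [0, 1] across the frame."""
        return (self.here[0] + (self.there[0] - self.here[0]) * t,
                self.here[1] + (self.there[1] - self.here[1]) * t)

s = Sweep(here=(0.0, 0.0), there=(10.0, 0.0), radius=1.0)
# halfway through the frame the centre is at (5.0, 0.0)
```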

Second: find collisions between the volumes the way you'd find them between any objects. Keep track of the overlapping volumes.
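One cheap way to test the volumes against each other, assuming the capsule representation above, is to wrap each sweep in an axis-aligned bounding box and test those; anything whose boxes overlap is a candidate for the timing check in step three. A sketch under that assumption:

```python
def sweep_bbox(here, there, radius):
    """Axis-aligned box enclosing everything a circle touches while
    moving in a straight line from `here` to `there`."""
    return ((min(here[0], there[0]) - radius, min(here[1], there[1]) - radius),
            (max(here[0], there[0]) + radius, max(here[1], there[1]) + radius))

def boxes_overlap(a, b):
    """True if two axis-aligned boxes ((lo_x, lo_y), (hi_x, hi_y)) overlap."""
    (alo, ahi), (blo, bhi) = a, b
    return (alo[0] <= bhi[0] and blo[0] <= ahi[0] and
            alo[1] <= bhi[1] and blo[1] <= ahi[1])

# A slow ball and a fast ball whose paths cross during the frame:
b1 = sweep_bbox((0.0, 0.0), (1.0, 0.0), 1.0)
b2 = sweep_bbox((5.0, -5.0), (-5.0, 5.0), 1.0)
candidate = boxes_overlap(b1, b2)  # True → worth the timing check
```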

Third: when you have collisions, for each object involved: work out when it first entered and last exited the overlapping volume. If no two objects were actually in there at the same time, it's not a real collision. Otherwise, calculate the positions and velocities of the objects *now* as if they'd bounced off each other at the time of the collision. And there you have it.
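The "were they in there at the same time" question has a closed form for two circles moving in straight lines: their separation over the frame is a quadratic in t, so the first moment of contact (if any) is the earlier root. A sketch, with the function name my own:

```python
import math

def first_contact(p1, v1, r1, p2, v2, r2):
    """Earliest t in [0, 1] at which two circles moving in straight lines
    (position p + v*t over the frame) first touch, or None if they don't.
    Solves |(p2 - p1) + (v2 - v1)*t| = r1 + r2 for t."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    wx, wy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    R = r1 + r2
    a = wx * wx + wy * wy
    b = 2 * (dx * wx + dy * wy)
    c = dx * dx + dy * dy - R * R
    if a == 0:
        return 0.0 if c <= 0 else None      # no relative motion
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                         # paths never close enough
    t = (-b - math.sqrt(disc)) / (2 * a)    # earlier root: first contact
    return t if 0 <= t <= 1 else None

# Head-on: unit circles starting 10 apart, closing at 10 units per frame.
t = first_contact((0, 0), (6, 0), 1.0, (10, 0), (-4, 0), 1.0)
# they touch when the gap shrinks from 10 to 2, so t = 8/10 = 0.8
```

Given that t, rewinding the objects to their positions at the moment of contact and bouncing them from there gives the "as if they'd bounced at the time of the collision" state.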

Obviously there're plenty more details to include, like multiple object collisions, finding the *first* collision, finding subsequent collisions, etc. But those are just details. I've done the hard work already. Now someone should go implement it. Preferably in the same game that uses the AI I proposed in an earlier post.

On reflection, you could probably simplify it too, and only do the volume projections for fast-moving objects (like bullets). I think calculating a long bullet (one that occupies all the space the bullet would occupy over a period of time) would be sufficient for an FPS, unless you want hyperspace bullets that sometimes go through things without leaving a mark.
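For a point-sized bullet, the "long bullet" degenerates to a segment, and testing it against a target is just a segment-versus-circle distance check. A minimal 2D sketch of that simplification (names my own):

```python
import math

def segment_hits_circle(a, b, centre, radius):
    """True if the segment from a to b (the 'long bullet') passes within
    `radius` of `centre`, i.e. the bullet's path crosses the target."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    acx, acy = centre[0] - a[0], centre[1] - a[1]
    ab2 = abx * abx + aby * aby
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0, (acx * abx + acy * aby) / ab2))
    cx, cy = a[0] + abx * t, a[1] + aby * t
    return math.hypot(centre[0] - cx, centre[1] - cy) <= radius

# A bullet covering 100 units in one frame can't hyperspace past a target:
hit = segment_hits_circle((0, 0), (100, 0), (50, 0.5), 1.0)  # True
```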

By the way, I'd like to add to my last post in the part about perspective projections, where I gave `{ x, y, z } → { x / (z + 1), y / (z + 1) }`; that formula, especially the `1`, was actually derived from a more complicated calculation that added a new variable, the *distance to screen*. The *distance to screen* represents the distance from the "viewer" (the focal point) to the "screen" (the plane onto which the image is projected). That is just another way of specifying the field of view, or the arc distance of the rendered scene. The image to the right illustrates how increasing the *distance to screen* shrinks the field of view (the red viewer is about 2.8 times the blue viewer's distance from the screen, and the width of its visible area is approximately 60% of the blue viewer's). If you really want to know, the field of view can be calculated as `2 × atan(screen width / (2 × distance to screen))`. I chose `1` because I felt like it. For a 2-pixel-wide scene at one unit per pixel, that gives a FOV of 90°; for 1600px it gives 179.86°. GLX lets you specify the field of view and, from that and the projection settings, calculates the distance to screen for you. So there you go.
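The FOV formula and its inverse (the calculation GLX effectively does for you) can be checked directly; the function names here are my own:

```python
import math

def fov_from_distance(screen_width, distance_to_screen):
    """Field of view (degrees) for a screen of the given width placed
    `distance_to_screen` in front of the focal point."""
    return math.degrees(2 * math.atan(screen_width / (2 * distance_to_screen)))

def distance_from_fov(screen_width, fov_degrees):
    """Inverse: the distance-to-screen a given field of view implies."""
    return screen_width / (2 * math.tan(math.radians(fov_degrees) / 2))

print(fov_from_distance(2, 1))               # 90.0 — the `1` in the formula
print(round(fov_from_distance(1600, 1), 2))  # 179.86
print(distance_from_fov(2, 90.0))            # ≈ 1.0, recovering the `1`
```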

... Matty /<


- License: CC BY-SA 4.0
- Tags: development, graphics, physics, software