Abstract Plans (2024-02-11) https://matthew.kerwin.net.au/blog/?a=20240211_abstract_plans Abstract Plans

I am broken.

I can never remember plans. Not unless they're written down on a calendar or something.

I want to try to explain it. To get it into words. Because the act of turning thoughts into words usually helps me understand things.

In my mind a lot of future events are .. theoretical? I don't think that's the right word, but I don't know what the right word would actually be. If you say "I'm going to a show on Saturday" or even "Do you want to do something next week?" I can engage in that conversation, but it happens in a kind of abstract sense.

  1. Yes, you're going to a show, that sounds great. You could wear this and that. I hope you have fun. (End of story)
  2. I would love to do something next week. I'm free on Tuesdays. (End of story)

It doesn't ever slot into the.. the bigger picture? Again I can't find the right words. It doesn't switch from abstract to concrete in my mind until something happens to make it "real". Like, writing it on a calendar.

And when it's abstract, my mind doesn't hold onto it. I might recall it if reminded, but often it will have completely gone and I won't know what you're talking about if you bring it up again.

And then things happen. And I'm surprised by them.

  1. Oh, you can't hang out because you're at a show? (Rejection sensitivity activates)
  2. Oh, you want to do something on Tuesday? I'm not physically or emotionally prepared for that. (Change aversion activates)

It's too easy for conversations to take place in a part of my brain that is isolated. Maybe compartmentalised is a relevant word? I need a specific event or ritual to push it over into the "real world", to give it actual consequence and to fit it in with things that are actually happening. And I struggle with that.

A Perfect Circle? (2024-01-29) https://matthew.kerwin.net.au/blog/?a=20240129_a_perfect_circle A Perfect Circle?

My approach to knowledge is that it is impossible to know anything for certain. Which means it's okay with me if it's good enough.

Imperfect Knowledge

To know something fully you have to possess all the information about that thing. It would not be enough to be able to record every particle that makes it up, you would also have to record all the properties of those particles: their types, masses, locations, momentary velocities, charges, spins, etc. The space required to write down all that information would necessarily be larger than the thing itself.

A human brain has a finite number of neurons, which make a finite – huge, but finite – number of synaptic connections, and so has a finite amount of "space" to store information. And because it has a finite amount of space, it can never have full knowledge of itself. There isn't enough room to store all the information about all of its own neurons and synapses inside those same neurons and synapses. So it certainly cannot have full knowledge about the rest of the person it's inside. Or the world around it. Or the universe.

I accept this limitation, because it allows me to relinquish the pursuit of perfect knowledge. I am freed, instead, to use approximations, to say that some particular knowledge is "good enough." But good enough for what?

Well, I'm a person, I live my life at "person" scale – things that matter to me from moment to moment are usually somewhere between micrometres and thousands of kilometres, between micrograms and tonnes, between milliseconds and decades. Much bigger than electrons, much smaller than galaxies. So if I'm faced with a problem in that range, I need to be able to use my imperfect knowledge to predict or generate an outcome that is also within that range.

Say you throw a ball, I don't need to know exactly how many protons and neutrons and electrons make up the ball, or their individual locations or kinetic energy or anything like that. I can approximate, and say it's "about 5¼ ounces" and travelling "about 45 miles per hour", and work out where to put my hand to catch it.

That approximation is called a "model", since it's – literally – a simple model that represents an incomprehensibly complex thing. Models are really important for letting my finite and very limited brain try to interact with the almost infinite complexity of the world around me in any meaningful way.

And that approximation – considering a couple of gajillion particles as a single object with a single mass and a single velocity – works well enough for us to play catch, but it also works well enough for us to use it as the basis to make predictions about other events. What if we throw the ball harder? What if we throw a heavy rock instead of a ball? Based on what we observe we can write a formula, then we can plug in numbers like the weight of an object and the speed it's thrown, and use that to calculate where that object will land. Then if we do it a bunch of times, with a bunch of objects and a bunch of speeds, we can build confidence in the formula, and confidence that our measurements are in fact "good enough."
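
To make that concrete, here's a toy version of the kind of formula I mean, sketched in a few lines of Ruby (my sketch, not anything we actually derived above – it ignores wind, air resistance, and the height of your hand):

# A "good enough" model: an object thrown at `speed` metres per second,
# at `angle` degrees above horizontal, lands roughly this many metres away.
# Assumes flat ground, no air, and g = 9.8 m/s².
def landing_distance(speed, angle)
  g = 9.8
  radians = angle * Math::PI / 180
  speed ** 2 * Math.sin(2 * radians) / g
end

landing_distance(20, 45)   # => ~40.8 metres

Interestingly, the mass doesn't appear in this idealised version at all – it only starts to matter once you add things like wind and drag back in.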

The formula is also called a "model", since it's a simple model of an even more incomprehensibly complex set of interactions and events. No computer today would be able to track all the particles involved, or calculate the squajillions of interactions between them, but I could work out a simple formula in my head (or with pen and paper, at least). It may be imperfect, but "good enough" is so much more useful than "couldn't work it out in time" ...or at all.

However, even if we test it a hundred times, and get it right all hundred of those, that isn't proof – what we've done is build confidence. It can be very high confidence, supreme confidence even. We can be practically certain that we can use our formula to predict where any object of any mass at any speed would end up, but we have to remember that there are limits.

What about the wind? If a ball is light enough, or the wind is strong enough, we'll have to add new variables to our formula, and take new measurements each time. But say we do that, and we test the new formula a thousand times under various conditions, and it gets it right all thousand of those – that's still not proof. What other variables have we missed? Does the temperature matter? Or the elevation above sea level? And what about if the ball were the size of a galaxy? Or a neutron? Can we be sure that the formula still works reliably? Or that we're even able to take accurate measurements at those scales?

We also have to remember that the ball isn't just a ball – it's a bajillion particles with their own masses and velocities and behaviours. We aren't measuring and feeding all of them individually into our formula. And if we were, how do we know there aren't other interactions between the particles that we're ignoring? Like, what makes them stay together in roughly the same shape? Our formula doesn't say anything about that. It's still a good formula, we're really really confident that we can use it to make useful predictions, but it's not perfect.

Making Predictions

The process of making a formula is itself a whole thing. Usually it starts with observing something in the real world, a physical event, that we want to be able to predict or influence in a predictable way. A ball we want to catch maybe, or an apple falling from a tree.

Then comes the hard part: coming up with a narrative explanation, a story, that fits our observations. "A body in motion or at rest will remain so unless acted on by an outside force." "Two objects are attracted to each other with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between them." That sort of thing.

And then, the fun part: turning that narrative into a formula. Working out how to take measurements that we can plug into that formula. Using the calculations to make predictions about real world events. And using the outcomes to either build confidence in the formula, or to come up with refinements and improvements, or to scrap it altogether and start over.

There is an entire field dedicated to the process of coming up with formulae and then coming up with specific ways of testing them – identifying variables that we can control under particular circumstances, performing the actions, comparing the results with our predictions, and documenting the whole thing rigorously so other people can also set up their own tests and verify it for themselves. This field is called "science", and the process is the "scientific method."

Eventually, after various iterations and trials and revisions we might come up with a formula that we're really confident about. We might get to learn the limits of where it's applicable and where it doesn't work so well. And we might have shared it with other people who have also independently run their own tests and fed back into the process too. It's still not perfect – there's no proof, the narrative and its formula remains a theory – but it's a really good one, and we know where and how to use it.

A Big One

In the late 1600s a guy in England came up with some of these narratives and formulae; in fact I paraphrased two of them earlier in this piece. The second one, about objects being attracted to each other, was called his "law of universal gravitation." It might have been called a "law", but today we often refer to it as a "theory" – in the sense that it, like everything else I'm talking about, isn't perfect. It's one of those "good enough" formulae – one that we have, collectively as a society, been using and verifying and building confidence in, for over three hundred years now. We've found some limitations, the extreme situations where the (relatively) simple formula doesn't lead to reliable predictions – but for the vast majority of cases it's shown itself to be incredibly reliable, and we as a society have used it to do some pretty incredible things.

While I'm talking about that English guy – his name was Isaac Newton, by the way – I should probably point out something that he himself said about this "law", and which has held true ever since. Neither the formula, nor the narrative it represents, has ever explained why it happens; it only helps describe how. We can predict where a thrown ball will land, but we still can't explain why it falls. And that's fine, the formula is still useful – as long as we can make the predictions.

There are implications from these models, too. If a ball and the Earth are both attracted to each other, then why doesn't the Earth move up? Partly we can make sense of this by incorporating the first narrative/model I mentioned earlier – that things at rest want to stay at rest. (This is one of Newton's "laws of motion", incidentally.) Further, their obstinacy is proportional to their mass. In other words, if you want to make a very big world move enough that you can detect it, you need to apply a very big force. A small ball, however, doesn't take much force at all before it starts moving enough that we can see it and measure it. So the ball visibly drops, but if the world moves, it's by so little that we couldn't see it anyway.

Another implication comes from the fact that it is, after all, a model. The Earth isn't a single object, it's – as far as we can estimate – about 130,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 (that's 1.3×10⁵⁰) atoms, each of which is made up of a couple of dozen electrons and protons and neutrons, and those protons and neutrons are made up of smaller particles (quarks and such) ... Each of those particles feels the attractive force from every other particle.

Aside: In fact, they feel it from every particle in the entire universe; however, as the distance increases the force lessens by the distance squared. So a particle on the moon might be about 30 times as far away as a particle on the far side of the Earth, which means the gravitational force would be 1/900 as strong. A particle on the sun might be about 11,500 times as far away, so the attraction would be 1/130,000,000 as much. At a certain distance you can say the force is close enough to zero that, for most intents and purposes, it can be ignored.

To reduce all of those particles to a single object, to make the maths reasonable, it turns out it's good enough to take the approximate average of all of their locations, and declare that point as the "centre of mass." Anything that is attracted to all of those particles is essentially attracted to that location. Another way to look at it would be to calculate all the attractive forces to all those particles (as vectors – strength and direction) and take their average. That singular "average force" points towards the centre of mass, and gives a single value for how strongly an object is attracted to that point.
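
As a tiny sketch of that averaging (in Ruby, with made-up numbers, and only one dimension to keep it readable):

# The centre of mass is the mass-weighted average of the particles' positions.
particles = [
  { mass: 2.0, x: 0.0 },
  { mass: 1.0, x: 3.0 },
  { mass: 1.0, x: 5.0 },
]

total_mass     = particles.sum { |p| p[:mass] }
centre_of_mass = particles.sum { |p| p[:mass] * p[:x] } / total_mass
# => 2.0

For the real Earth you'd do the same thing in three dimensions with 1.3×10⁵⁰ entries, which is exactly why we don't.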

That means that the ball is (or more accurately, all the particles in the ball are) pulled towards the Earth's centre of mass. And the particles in the Earth are pulled towards the ball, too. Those closest to the ball's centre of mass might even fly upwards – at half the distance, the attraction is four times as strong; a hundred times closer, ten thousand times stronger – and because they are much lighter than the ball, they might move much, much faster than the ball. But they aren't strongly attached to the rest of the world. The Earth is kind of flexible, a little bit gooey – especially at the surface – and constantly in motion. So part of the Earth probably is pulled up towards the ball, a tiny bit, but for any Earth particles further away from the ball a good fart nearby would probably affect their movement more than gravitational attraction to the ball.

What is up? (Baby, don't hurt me)

Most of the time, for most of the history of humanity, the overwhelming force that affects our day to day existence is gravitational attraction to the Earth. It's the force we fight against when we stand up, the one that pulls us down if we lose balance and fall. So our bodies contain a pretty good gravity detector, which we use to keep our balance and not fall over. Except that it's not specifically a gravity detector, it's not even really a force detector, it's actually an accelerometer. It works for us, and we can explain it using yet another of Newton's formulae: Force = mass × acceleration

Neither the mass of the Earth nor the mass of a person changes very much from moment to moment, so in terms of that formula we can say the mass is constant. That means it's not an interesting variable, so we can factor it out of the formula. I.e. when mass is constant, force is simply proportional to acceleration.

And since gravity is the dominant force we feel most of the time, our formula can be reduced down to: gravity = acceleration

We even have a number for it: 32 ft/s²

So most of the time, an accelerometer works as a pretty reliable gravity detector.

And because our accelerometer/gravity detector is built into us, it's natural that we have a word for what we sense with it. In English that word is "down." When we're standing on the surface of the Earth, the average force of all the particles of the Earth pulling on us registers in our accelerometer and we can sense which way is down. And, like the ball from before, down is towards the centre of mass.

That's cool, because it doesn't matter where you are, "down" means "towards the middle." Every person, every ball, every molecule of dirt or water or air, everything is pulled towards the centre of mass. And as the particles and objects move, the centre of mass also moves a little, but there's always a centre of mass. Always an average.

When everything gets pulled towards a single point there's a natural tendency for them to form into a ball. Particles closer to the point block those further out, and the further ones move around trying to find a spot closer to the point; and layer by layer, piece by piece, they settle into the low spots, filling it out and forming into a sphere. There are other factors, of course – as with everything I've written so far, this is a "good enough" approximation – for example there are other forces, like the forces that make some atoms cling to each other to form crystals. The particles in a crystal don't flow so freely; but if there are enough particles in total (for example, the number of particles in the Earth) it doesn't matter much if they form into crystals or rocks, there's still plenty of fluidity to flow around and make a roughly spherical shape.

Fun fact: if you shrunk the Earth down to the size of a pool ball it would be a more perfect sphere than an actual pool ball, and would be almost as smooth (it'd probably have a texture like fine grit sandpaper.) [citation provided] Mountains are huge, but the Earth is so much bigger it's hard to comprehend.

We find the limits of using an accelerometer as a gravity detector when other forces get involved, though. For example, if you get spun around by the arms fast enough you experience a "centrifugal force" (which is a whole other "good enough" that I'm not going to go into) which can be even stronger than the gravitational attraction towards the Earth. By which I mean: when you get spun around, your legs fly out to the side instead of hanging straight down. Your sense of "down" shifts, to line up with the average force/acceleration you are experiencing.

It also breaks when you jump off something high. The accelerometer works by resisting the forces; if you're free to accelerate along with them (i.e. fall) it registers as close to zero, and you get a weird floating feeling.

As a society we know this. We have learned to use the word "down" to mean the direction you would feel if you were standing still on stable ground. This gives us a reliable frame of reference when communicating with other people. And we have other related words, like "up" – which means the opposite direction to "down." Your individual sense of "up" might change sometimes, but we have come to understand that the sky is up, and the soil is down. And most of the time that's definitely "good enough."

However if you want to do something more extreme, like travel really far, or go really fast, or fly into space, you need a more precise way to talk about the shape of the world and the directions of the forces you'll be experiencing. "The sky is up" doesn't cut it if you consider that there's sky around all of the Earth – in fact, most of the sky is "down" compared to your own local sense of which way up is, since more of the atmosphere is below the horizon and hovering around the far side of the planet than there is above you. So we can instead say "down is towards the average gravitational force you are experiencing" and "up is the opposite direction"; but even that stops being useful if, say, you're far enough out in space that the distances from all the other particles in the universe mean the gravitational force you're experiencing gets close enough to zero to not matter. Or if you're constantly falling, and all your accelerometers depend on you not falling for them to work properly.

Fortunately, for most of us, this will never be a problem. So our understanding of up and down is usually good enough.

Footnote: 150 g, 70 km/h, 9.8 m/s² ... I had a particular audience in mind when writing

Communication (2021-10-11) https://matthew.kerwin.net.au/blog/?a=20211011_communication Communication

Communication has two parts: the medium, and the matter.

The medium is about the people. There's a speaker who provides information, and a listener who receives it. What counts as information in the minds of the speaker and the listener – the structure, the raw data, whatever – doesn't have to match (it's not just copying and pasting a file), but both people have to be able to use the information, to have understanding and be able to make predictions based on it. The more compatible the speaker and listener's mental models, the less work is required to break the information down to a common denominator.

The third component, after the two mental models, is a shared language. The more similar the mental models, the more specific (and, hopefully, efficient) the language can be. We use things like education and training and mentorship to build common mental models and to learn specialised languages for information sharing.

The matter is about whatever it is the information describes; the object or system or concept that is being understood. To be able to communicate, both the speaker and the listener have to be able to hold that information in their own mental models. The more accurate the mental model, the more useful the information it holds. And generally speaking, a model's accuracy is based on detail and specificity – more specific: more accurate. So a thorough understanding of the topic is critical for successful communication.

Software development is all about communication. Someone has a problem they have to communicate to the developers (bug tickets, problem statements, requirements docs); developers build a solution that they have to communicate back (manuals and training); developers also have to communicate with each other (design docs, code comments, API descriptions). You might even say that developers have to communicate with the computer, specifying data and algorithms in a model compatible with the computer's storage and processing systems. At every point there is some information that has to be communicated, which means having understanding and using shared languages and models.

In cases of interactive communication the burden of actually transforming the information between mental models and languages can be negotiated and adjusted (often implicitly); however in offline communication (i.e. anything static, like documentation and code comments) the burden falls entirely on the speaker. They can make contextual assumptions about the listener's mental models and languages (consider a theoretical programmer reading your comments vs an end user reading your manual) however it's entirely the speaker's responsibility to transform the information from their mental model into that shared language at an appropriate level.

The ability to communicate with people whose mental models are different from your own is a specialist skill; that's why we have distinct titles for business analysts and technical writers and trainers. We shouldn't necessarily expect developers to write manuals.

[QUT] Working From Home (2021-08-16) https://matthew.kerwin.net.au/blog/?a=20210816_working_from_home Working From Home

After the recent lockdown and ongoing tightened restrictions, we received this in an email from our CIO:

This past week has been rather quiet in the office with most of you working from home. Let's hope it won’t be long before we get back to normal. If you are returning to campus please ensure you wear your mask at all times and adhere to the social distancing guidelines.

I take several issues with this.

The Old Normal

Why are people in senior positions so eager to "get back to normal"? That's rhetorical, of course; I know why – "normal" is what got them where they are, so they have positive associations with it; and: most people in senior positions are older, and you know what they say about older people and resistance to change. But the old way wasn't necessarily the best way, let alone the only way. In fact, the old way is what led us to where we are. Why not take the opportunity to do something new? Maybe even better?

Accountability

I'm on campus today because we are required (by our Associate Director*) to work from the office at least two days a week. My role doesn't require any face-to-face contact at all, neither with clients nor colleagues – my actual work is entirely on computers, which function just as well when I'm at home; and the occasional meeting on Zoom is acceptable – so there's no reason I can think of to require me to be on campus. And I'm sure the majority of my colleagues know their own positions and responsibilities well enough to make the call about how much time they need to spend physically present too. So why are we required to be here two days a week? The implication I draw is that management doesn't trust us to Do The Right Thing™. Not very empowering, nor great for morale.

At the same time as being told to work from the office, we're told to set up a roster so that we're not in at the same time as people around us (for distancing reasons), and reminded that we're required to wear a mask at all times, including in the office – far more restrictive than in the past. Which means they know it's not actually safe to be here. Coercing someone to be in an unsafe environment when you know it's not safe and when you know it's not required has to at least count as negligence, surely, if not wilful.

And on top of that, "please ensure you ... adhere to the ... guidelines" is an abrogation of responsibility. Telling us that we are personally, individually responsible for our own safety, despite being coerced into a known unsafe environment, is not just an attempt by management and/or the business to shirk responsibility for our safety, but a conscious one. Someone actively decided to cover their arse in this way. It's unconscionable.

And to tie it all together, the implication that management doesn't trust us, juxtaposed with them declaring us to be individually responsible, has to mean some combination of these two things: management actively wishes us harm; and/or management doesn't have a consistent purpose and vision†. Whichever way you put it together, not the people you really want to be working for.

  1. * the AD reports directly to the CIO
  2. † that's business jargon for "don't know what's going on, nor what to do about it". C.f. left hand doesn't know what the right hand is doing
Unfiltered Thoughts (2020-05-28) https://matthew.kerwin.net.au/blog/?a=20200528_unfiltered_thoughts Unfiltered Thoughts

Some little thoughts and observations that have been cluttering up my mind:

Identity

I feel uncomfortable with terms like "gay" or "straight" because they bind our identities to other peoples'. I get how labels (as symbols) can help build community, and that's valuable. If identifying as "gay" helps someone connect with people, who they can share experiences with, and feel safe, and strong, then that's unironically great and should be supported. And I get the value of a coarse-grained shorthand for, like, "people you might dig, who might dig you" vs all the combinations that aren't that.

But, using myself as an example because that's one I know fairly well: I'm straight. I'm attracted to women. Not all women, but of all the people I'm attracted to (I assume) they're all women. However: I'm assuming, and for the majority of cases it's vague physical attraction to someone I don't know; so I'm assuming based on their appearance and presentation, and my social and cultural conditioning, and whatever. But some of them could be men, or non-binary, or I don't know. If I were physically and emotionally attracted to a trans man, would that change my personal identity? Would it be insensitive or wrong to call myself straight in that situation? Does a ratio of one-to-one-billion make it "ok"?

The concept of gender is bad enough, but tying identity not just to our own genders, but other people's .. it feels off.

The Canned Acknowledgment Phrase

“We acknowledge the traditional custodians of the land on which we live and work and pay our respects to the Elders past, present and future.”

It's always something to that effect. I'm ok with the wavering around “traditional custodians” or “First Nations owners” or whatever, because that's hard to get right. And Elders, lores, customs, etc. – I don't know what to include and what's appropriate to leave out. But there are two things that get me:

  1. “We ... pay our respects...” – no you don't, you say that you pay your respects. It's like saying "I apologize" instead of "I'm sorry". It's not D&D; you're not narrating the world. At least admit that “we ... wish/intend/hope to pay our respects...”
  2. “...the land on which we live...”. The land isn't just the dirt. We're not above the land. And that combined with the fact that it's written with 'proper' grammar makes it feel like a bunch of political legalese – linguistic acrobatics to get around saying something human and real and meaningful. If it said “...the land where we live...” I'd feel a lot better about it.

Streaming Live

These are unprecedented times. :kappa: The ways we interact are different. We don't have the same access to our usual work setups, our offices, our colleagues; routines are out. Everything is online.

In this context the ubiquity of streaming services and high speed internet are great. Streaming has a lower barrier to entry than usual video production; there's a lot less lead-time and production required to get content out there. And it's more interactive, and personal, which helps with engagement and building community. And immediate feedback helps content creators steer towards what works for them and their audience on the spot, without investing all that production effort ahead of time into what may end up being a wrong direction. It's good.

However... I'm in Australia, and most of the people who I follow are in the UK. When they stream at three in the afternoon that's midnight here (currently; it's 1am at Christmas time). My only option is to watch the recordings (the "VODs") at a later time. So I miss out on the interactive, personal part of the stream. Except that it's still there – a large part of stream VODs is the streamer engaging with the live audience – so I have to observe these personal interactions that I can't take part in.

For me, the value I've gotten from the content in the past was increased by the production effort. Instead of replacing that added value, for me it's just taken away.

But I'm just one person. Most of the audience is in timezones that work better for live streams, so for most of the audience and for the content creators it works really well. I'll stop complaining.

Why Ruby Doesn't Have a Boolean Class (2018-11-26) https://matthew.kerwin.net.au/blog/?a=20181126_why_doesnt_ruby_have_boolean_class Why Ruby Doesn’t Have a Boolean Class

Why doesn’t Ruby have a Boolean class?

This is a fairly common question. Here is my answer.

Note: this is my answer, not necessarily anyone else’s. It is based on my experiences, and my understanding of things. Corrections are welcome.

First I must talk about types. And to talk about types I must talk about different languages and their type systems, and about data.

Types in C

A “value” in C essentially has two pieces of metadata: the location of some data, and the type of that data. As a gross generalisation: any piece of data in C is a sequence of bits, which said another way is just “an integer”. A given integer can be interpreted in various ways:

  • as an actual integer, with a given number of digits
  • as a memory address (at which may be stored more integers)
  • as packed data (e.g. an IEEE 754 floating point number)
  • etc.

The interpretation determines (or is determined by) which operations can be performed on the value.

The type is the piece of metadata attached to the value that describes which interpretation is correct for that particular value at that particular time.

When you write an operation in C, the compiler generates instructions to perform that operation; but the choice of instructions depends on the types of the values that are the parameters/operands and any outputs. For example, consider the instructions that could be generated to implement this simple arithmetic operation: x + 4

typeof x   asm                  bytecode
int32_t    add eax, 4           83 c0 04
uint8_t    add al, 4            04 04
float      mov eax, 0x1234      b8 34 12 00 00
           mov dword [eax], 4   66 c7 00 04 00
           fiadd word [eax]     de 00

As a C programmer your job is to declare the right types for your values, and let the compiler turn your high-level operations into low-level instructions.

Casting a value to a different type lets you tell the compiler to interpret the value’s data differently, and thus generate different instructions. Casting is itself a special type of operation in that it usually only has an effect at compile-time, but some casting operations can result in run-time instructions.

Types in Java

A “value” in Java basically has two pieces of metadata, too, which at their core align with those in C: a location and a type. However in Java the data isn't just a sequence of bits – it's an object.

Generally speaking, an object is both: the location(s) of some data, and the set of operations that can be applied to or performed on that data. In Java the operations take the form of methods (which are indistinguishable from functions to all but the most hardcore of computer science nerds.)

The relationship between data and operations is maintained through the “class” hierarchy. A class is a bunch of methods, maybe some static data, and references to “ancestor” classes, which in turn have their own methods and ancestors, etc. Every object has, alongside its concrete data, a reference to its class.

When a method is invoked on a value/object, the compiler uses the value’s type metadata and underlying object's class reference to find the nearest ancestor that defines the method in question, and generates instructions to invoke that particular method with the object's data.

To facilitate this process, Java is incredibly anally retentive about ensuring that every value’s type corresponds with its object’s class, or a superclass thereof.

This example from Stack Overflow is really nice, so I’ve changed it:

interface Domestic {}
class Animal {}
class Dog extends Animal implements Domestic {}
class Cat extends Animal implements Domestic {}

Animal pet = new Dog();

pet instanceof Domestic // true - Dog implements Domestic
pet instanceof Animal   // true - Dog extends Animal
pet instanceof Dog      // true - Dog is Dog
pet instanceof Object   // true - Object is the superclass of all

The type of the variable pet is Animal, and the actual object’s class is Dog, which has Domestic, Animal, and Object as superclasses.

The type and class are intrinsically linked, and introspective operators like instanceof, as well as Java’s strict compile-time type enforcing, raise this duality to the fore.

As a Java programmer your job is to define your class’s methods, and invoke methods on values based on their types.

Types in Ruby

Like Java, Ruby is an object-oriented language; however the type system works a bit differently from C or Java. For the purposes of symmetry let’s say that a “value” in Ruby has a location and a type; however for the purposes of illustration let’s say that all values in Ruby have the same type, i.e. they are all “Ruby objects”.

Because the type is always the same there is no need for a compiler to inspect that metadata and work out which instructions to generate to invoke a method. Instead all method invocations use the same instructions, which means they take effect at run-time; following the object’s class reference, traversing the ancestry hierarchy, and dispatching the method.

In turn this allows Ruby programs to redefine methods and classes and ancestry on the fly. Ruby is a dynamic language.
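
For example (a trivial sketch of my own), a method can be swapped out while the program is running, and every existing object picks up the new behaviour:

class Greeter
  def hello
    "Hello!"
  end
end

g = Greeter.new
g.hello   # => "Hello!"

# later, at run-time, reopen the class and redefine the method
class Greeter
  def hello
    "G'day!"
  end
end

g.hello   # => "G'day!" – same object, new behaviour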

This means that the class of an object in a Ruby program is not its type, in the usual sense; rather it is just a collection of functionality (methods) that apply to it. Common method implementations go into a common ancestor class.

Further along this train of thought, Ruby espouses a paradigm known as “duck typing” – which is to say that an object that waddles like a duck and quacks like a duck is, in fact, a duck. To compare it to our Java example above, the Dog class wouldn't have to extend the Animal class to be considered an animal, it would only have to implement the methods that are common to animals – eat, breathe, sleep, etc. Ditto the Domestic interface.
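
Sketched in Ruby (my example, echoing the Java one above):

# No Animal superclass, no Domestic interface – Dog and Fish count as
# "animals" simply because they respond to the right messages.
class Dog
  def eat;   "chomp";  end
  def sleep; "snore";  end
end

class Fish
  def eat;   "nibble"; end
  def sleep; "hover quietly"; end
end

[Dog.new, Fish.new].map { |pet| pet.eat }   # => ["chomp", "nibble"]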

And so to the question: Why doesn’t Ruby have a Boolean class?

The true and false objects in Ruby don't share any common method implementations. In fact, they’re the least likely pairing of objects to do so, since they are exact opposites. They may share similar interfaces (the set of method names, parameter lists, semantics, etc.) but the implementations will always be different.
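
You can see this in irb:

true.class    # => TrueClass
false.class   # => FalseClass

# same interface, separate implementations:
TrueClass.instance_method(:&).owner    # => TrueClass
FalseClass.instance_method(:&).owner   # => FalseClass

true & false   # TrueClass#&  – returns the truthiness of its argument
false & true   # FalseClass#& – always returns false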

If Dog and Fish don't share any common implementation there's no need for an Animal class, even if they can both eat and breathe. They do them all differently.

And that is why Ruby does not have a Boolean class.

[QUT] Ticket Lifecycle (2018-01-19) https://matthew.kerwin.net.au/blog/?a=20180119_ticket_lifecycle [QUT] Ticket Lifecycle

This is a conceptual model of a ticket's lifecycle, which could be useful when building workflows or generating reports. It looks like a waterfall, but it's different. Honest. 🌊

Three (non-terminal) phases: development, QA, release.

Each split into two stages (inactive/active), with two states (unassigned/assigned). The active-unassigned state doesn't make sense, so it's not included.

  • Reported – the backlog; distinct from "ready" because the team lead hasn't moved it into the currently-active scope yet
  • Development
    • Ready – any developer can pick up these items
    • Queued – assigned to a developer, not currently in progress
    • In progress – analysis, development, underpinning Service Requests, etc.
  • QA
    • ready for QA – any tester can pick up these items
    • queued for QA – assigned to a tester, not currently in progress
    • QA – undergoing testing/QA; includes code review, user acceptance testing, etc.
  • Release
    • ready for release – any operator can pick up these items
    • queued for release – assigned to an operator, not currently in progress
    • release – being released; includes the whole SCMG/CAB process
  • Closed

Some Observations

Ideally, any time a ticket moves to a different phase it should be de-assigned, so it can be picked up by a member of the team responsible for that phase. Realistically, it's usually the same person the whole way through, so it's okay to go straight from active to queued.

We often see the entire QA or Release phases wrapped up in single mega-stages, but I think that's a symptom of immature practices and we should question it whenever we see it.

There is no distinction in these diagrams between "new" and "abandoned" tickets – it doesn't matter how much work has been done on a ticket, just the nature of any work that remains.

I don't know which transition is equivalent to Jira's "resolve" – it can make sense to set a resolution at "done" and/or at "close".

Edit: 2018-01-30

There are two ways to get work from someone else:

  1. (internal) assign your job to the other person's queue (or just unassign it and bump it back to the "ready" state.)
  2. (external) request something (information/confirmation/upstream code changes/etc.) from someone, but retain ownership of the job yourself. This could include creating a sub-task.

In case 2 you need a way to indicate that the task is part of your current workload, but it can't progress until some third party completes some work. To that end, I've added a "waiting" state to each of the three main phases.

Edit: 2020-10-30

There is value in distinguishing the "waiting" states between "waiting for a process that intrinsically takes time/has a known delay/etc." and "waiting for something that requires action to progress". Internally we refer to these as "waiting" and "blocked", respectively. They aren't reflected in the model above, but could be represented by a "needs attention"-type flag on each of the "waiting" states.

[QUT] Operations Development (2017-06-21) https://matthew.kerwin.net.au/blog/?a=20170621_operations_development Operations Development

The Setup

Imagine this scenario:

You are a person with a job. Your job involves doing technical things on complex IT systems. Part of your work involves using other complex IT systems (e.g. email, job ticketers, wikis, etc.).

Now imagine this:

The complex IT systems you have to use to do your job have the odd bug, or require the occasional enhancement.

I know, it's a radical hypothetical, but it could happen.

And to complete the scenario:

There are teams of people in your organisation with jobs that involve doing technical things on the complex IT systems you have to use to do your job.

Terminology

Because I can tell this is going to get really wordy if I keep going like this, here are some fictional names which bear no resemblance to any real people or systems:

  • "WARMTH" – a complex IT system you have to use to do your job
  • "the WARMTH team" – the team of people with jobs that involve doing technical things on WARMTH

Endarkenment

In an unenlightened corporate environment the WARMTH system would be entirely configured, developed, and managed by the WARMTH team. The WARMTH code, maybe even documentation, would be restricted so that only members of the WARMTH team could access (let alone modify) it. Any bug or enhancement you identify would have to be assigned to the WARMTH team, then the WARMTH team would have to:

  1. gather requirements / identify the actual issue
  2. evaluate / approve / prioritise it (or reject it as not in line with the future vision for the service underpinned by the WARMTH system)
  3. identify and develop a fix/implementation
  4. QA the fix/implementation (code review, integration testing, etc.)
  5. deploy the fix/implementation
  6. confirm that it's resolved (including checking that you're satisfied)

That's a lot for the WARMTH team to do.

Enlightenment

Enlightenment comes when you realise that – while the WARMTH team members are all "people with jobs that involve doing technical things on complex IT systems" – you, too, are "a person with a job that involves doing technical things on complex IT systems."

Your particular and specific expertise may not be in dealing with the complexities and idiosyncrasies of WARMTH, but as an end user you should have a good idea of how it works (and share the future vision for the service it underpins, but that's another story), and as a technical person you should have a certain generalised ability to identify and resolve simple technical issues (particularly issues of content and configuration, if not code).

Models

In the technical industry we have a pretty well-developed model:

  1. work is done in an isolated "development" space;
  2. candidate changes are reviewed, then merged into a clean "working" space; then
  3. the set of merged changes undergoes final QA, before being released into the "production" space.

In unenlightened corporate environments all three phases are handled within the team. However in open source communities the first phase can be handled by anyone (team members, invested stakeholders, or random altruists) – reducing some of the development burden on the team.

An enlightened corporate environment would borrow from the open source community: accept that all of its technical employees, across the entire organisation, are worthy and capable – and open up the first phase of the model to all of them.

The Payoff

Now, imagine this scenario:

You are at work. You identify an issue with WARMTH. You log in to an isolated DEV-WARMTH environment and make some tweaks to the configuration that you think resolve your issue. Then, you submit your changeset to the WARMTH team along with a description of the issue it resolves.

At this point the WARMTH team would:

  1. evaluate / approve the issue (or reject it as not in line with the future vision for the service underpinned by the WARMTH system)
  2. QA the submitted changeset (check whether it resolves the issue, introduces regressions, meets coding standards, etc.)
  3. deploy the changeset

That's a lot less for the WARMTH team to do.

Of course, it won't always work. Sometimes an issue is too deeply entrenched, and requires too much specific knowledge of WARMTH and its complexities and idiosyncrasies to be resolved. Sometimes the enhancement you propose doesn't mesh well with someone's vision of how WARMTH should work. Sometimes you just don't have the time or inclination to fix something for some other team.

And other times it might be difficult. Perhaps the change you make can't be communicated easily (the config isn't in version control, or isn't in a format that can be easily diff'ed, etc.). Perhaps your change doesn't resolve the issue or introduces regressions, so the WARMTH team still has to do all the identification/development work. Perhaps a swarm of bees rushes out of the air conditioning and engulfs you every time you log in to DEV-WARMTH (obviously the WARMTH team are specially trained to deal with this).

However the improvements to the service (some bugs are fixed in esse before they're even identified by the WARMTH team), workload (less pressure on the WARMTH team), buy-in (you feel like you have some input into WARMTH, and so become a little more invested), and knowledge sharing (something something silos) are self-evidently worth it.

Update [2017-06-22]:

I know the bees are a well-documented known risk, but not everything is like that. Here is another potential pothole on the path to paradise, to ponder, and possibly preempt:

Isolation isn't easy

  • Some systems don't lend themselves to having multiple development spaces (because they're a pain to set up, or they're prohibitively large, or there are licensing issues, etc.)
  • If there's a single DEV-WARMTH environment, your fiddling can interfere with other people's. (Including the WARMTH team's, which is arguably a higher priority than yours.)

In theory a good DevOps / automated deployment strategy can get around most of these issues (not licensing, alas), but an unenlightened corporate environment is not likely to have implemented a good DevOps / automated deployment strategy.

Perhaps that is the first step to enlightenment.
[QUT] Dynamic Object Orientation (2017-04-11) https://matthew.kerwin.net.au/blog/?a=20170411_dynamic_object_orientation Dynamic Object Orientation

MOO-Code

Back in the olden days (1995-2000) I used to play in a programming language called MOO-Code. [1] "MOO" stands for "MUD, Object Oriented", so it's pretty safe to assume some amount of object-orientation is involved. MOO-Code is not a pure OO language – it contains several data types (integers, real numbers, strings, objects, errors, and lists) and a few built-in functions – however its treatment of objects is interesting: the database stores only objects, their "attributes" (special data), "properties" (general data), and "verbs" (methods). There are no classes – verbs are part of their object – instead each object has the attribute ".parent", which is a reference to its parent object. (A "duck" may be a child of the "generic duck", but that generic duck is still a real object you can interact with in the MOO.) When a verb is called that isn't defined on the object in question, or the built-in function pass() is called inside a verb, it walks the .parent hierarchy until it finds an object that does define the verb, and calls that (the special variable this is initialised in the verb as a reference to the original object, allowing for proper polymorphism).

This is a form of prototype-based programming – delegation.
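
I can't easily show real MOO-Code here, but here's a rough Ruby imitation of that .parent walk, just to illustrate the delegation idea (the names and details are made up):

# Objects are just bags of verbs plus a reference to a parent; verb lookup
# walks the parent chain, but runs the verb against the original object
# (playing the role of MOO's `this`).
class MooObject
  attr_accessor :parent
  attr_reader :verbs

  def initialize(parent = nil)
    @parent = parent
    @verbs  = {}
  end

  def call_verb(name, *args)
    owner = self
    owner = owner.parent until owner.nil? || owner.verbs.key?(name)
    raise NoMethodError, "no verb #{name}" if owner.nil?
    instance_exec(*args, &owner.verbs[name])
  end
end

generic_duck = MooObject.new
generic_duck.verbs[:quack] = proc { "Quack!" }

duck = MooObject.new(generic_duck)   # a child of the generic duck
duck.call_verb(:quack)               # => "Quack!" – found on the parent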

Javascript

Javascript also uses the delegation flavour of prototype-based programming, based on each object's internal prototype reference (the one you normally set up via a constructor's .prototype property). Not much more to say, really, except that the object that is used as a prototype in Javascript is rarely used as a first-class object, which can more often be the case in a MOO. (You never really touch Array.prototype except to modify the functionality of all Array objects.)

Ruby

Nobody would call Ruby a prototype-based language; however, when you peek under the hood it's easy to start seeing similarities. Ruby is quite strongly object-oriented, as very few values are not first class objects (methods can be captured as objects, kind of, but blocks can only be encapsulated in Proc objects – lambdas included) – significantly, classes are themselves objects. Each object has a special reference to its class (and that, in turn, to its ancestors), which defines the methods that can be invoked on the object. It also supports multiple inheritance by allowing lightweight classes (modules) to be "mixed in" to an object or class's ancestor hierarchy. The runtime lookup of a method in Ruby is a lot like the delegation used by MOO-Code and Javascript, which allows for dynamic modification of class hierarchies and methods.
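
A quick illustration of that lookup chain (reusing the animals from the Boolean post):

module Domestic
  def house_trained?; true; end
end

class Animal; end

class Dog < Animal
  include Domestic            # a module "mixed in" to Dog's ancestry
end

Dog.ancestors
# => [Dog, Domestic, Animal, Object, Kernel, BasicObject]

# and the hierarchy is still open at run-time:
class Animal
  def legs; 4; end
end

Dog.new.legs   # => 4 – found by walking the ancestor chain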

The difference is that the methods are defined in the class, not on the object. This is even true of "singleton methods" – Ruby objects each have their own special class (called the "singleton class"), which only they implement, that holds methods defined "on" that object.

o = Object.new
def o.foo
  puts "FOO!"
end
o.foo                                                 # prints 'FOO!'
p( o.singleton_class )                                # prints '#<Class:#<Object:0x000000015729d8>>'
p( o.singleton_class.instance_methods.include? :foo ) # prints 'true'
[QUT] On Network Interfaces and Sockets and MySQL Users (2017-03-24) https://matthew.kerwin.net.au/blog/?a=20170324_network_interfaces_and_mysql On Network Interfaces and Sockets and MySQL Users

Network Interfaces

A computer usually has one or more network interfaces. Traditionally these correspond with actual physical network interface cards (NICs), but there are also virtual interfaces (for virtual networks) and sub-interfaces. Each interface has an address for each protocol it speaks. For example:

$ ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:27:9f:5b 
          inet addr:131.181.125.21  Bcast:131.181.125.255  Mask:255.255.255.0
          inet6 addr: fe80::7bf0:89d2:82d6:52b5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:498614 errors:0 dropped:0 overruns:0 frame:0
          TX packets:164551 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:178470125 (178.4 MB)  TX bytes:20115698 (20.1 MB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:04:d1:46 
          inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::77ca:11b1:4693:6c4e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5545 errors:0 dropped:0 overruns:0 frame:0
          TX packets:976 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:599732 (599.7 KB)  TX bytes:189542 (189.5 KB)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:630 errors:0 dropped:0 overruns:0 frame:0
          TX packets:630 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:66588 (66.5 KB)  TX bytes:66588 (66.5 KB)

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 
          inet addr:172.28.19.180  P-t-P:172.28.19.180  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1300  Metric:1
          RX packets:70344 errors:0 dropped:0 overruns:0 frame:0
          TX packets:66166 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:19308394 (19.3 MB)  TX bytes:4501547 (4.5 MB)

I have two physical (called eth0 and eth1) and two virtual (lo – loopback – and tun0 – my VPN tunnel) interfaces. The physical interfaces have Ethernet addresses ("HWaddr"), all of them have IPv4 addresses ("inet addr"), and all but the tunnel have IPv6 addresses ("inet6 addr"). Any IPv4 packet that comes out of an interface has the interface's IPv4 address in its "source address" header field. I.e. when you connect through an interface, you use that interface's address.

Sub-interfaces are a logical way to assign multiple addresses to a single interface. For example, on eprints01:

eth1      Link encap:Ethernet  HWaddr 00:50:56:81:4A:F9 
          inet addr:131.181.108.175  Bcast:131.181.108.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3061071845 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3088297840 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2051440318461 (1.8 TiB)  TX bytes:1906068417248 (1.7 TiB)

eth1:1    Link encap:Ethernet  HWaddr 00:50:56:81:4A:F9 
          inet addr:131.181.108.87  Bcast:131.181.108.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth1:2    Link encap:Ethernet  HWaddr 00:50:56:81:4A:F9 
          inet addr:131.181.108.95  Bcast:131.181.108.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

Sockets

In typical network programming the bind(2) function is used to bind a newly-created socket to a local address (and hence to a network interface), thus giving all communications over that socket that interface's address. It's also possible to bind to the wildcard address (INADDR_ANY – effectively a "null" interface), in which case the operating system chooses which interface/address to use.
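
At the raw socket level you can do that binding by hand before connecting. Here's a Ruby sketch (the server name and port are made up); the catch, as described below, is that an abstracted client library may never expose this step:

require 'socket'

sock = Socket.new(:INET, :STREAM)
# bind to the eth1:1 address from above (port 0 = any local port) ...
sock.bind(Addrinfo.tcp('131.181.108.87', 0))
# ... so the far end sees connections coming from that address
sock.connect(Addrinfo.tcp('mysql-server.example', 3306))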

MySQL Users

MySQL users are identified by a combination of the user-supplied username and the IP address of the client. (E.g. 'matty'@'131.181.125.21') Sometimes it's written as a DNS name (e.g. 'matty'@'some-computer.library.qut.edu.au') but the MySQL server is clever enough to cross-reference DNS names and IP addresses as required.

To connect to a MySQL server, therefore, you need to be able to control which interface your socket is bound to. In some cases there is so much abstraction involved there's no way for the developer to instruct the MySQL client to bind to a particular interface. Presumably, in that case, it defaults to "null" (i.e. operating system's choice.)

Which is stupid, because that's a really important variable to be able to control, as it's part of the connecting user's identity.

Route Tables

On Linux you can use the command ip route to view and update your IP routing tables. I'm given to believe that this can be used to force network communications out over a particular interface. Here's the route tables on eprints01:

$ ip route show
131.181.108.0/24 dev eth1  proto kernel  scope link  src 131.181.108.175
131.181.186.0/24 dev eth0  proto kernel  scope link  src 131.181.186.58
131.181.185.0/24 dev eth2  proto kernel  scope link  src 131.181.185.49
169.254.0.0/16 dev eth2  scope link
default via 131.181.186.1 dev eth0

For some reason it's configured to send all connections to 169.254.* out on eth2, but most other outward connections should – by default – go out on eth0. So if you set up a route for your MySQL server you should be able to control the user's address. Sadly, there doesn't seem to be a way to add routes that use sub-interfaces.

[QUT] SELinux (2017-03-17) https://matthew.kerwin.net.au/blog/?a=20170317_selinux SELinux

SELinux is another layer of security complexity that sits below regular GNU user/group/other permissions.


Terminology

Context

A context is an n-tuple of:

  • user
  • role
  • type
  • range (optional)

Usually denoted as:

user:role:type
user:role:type:range

By convention: users end with _u, roles end with _r, and types end with _t.

e.g.:

$ ls -Z ~/.bashrc
-rw-------. matty default unconfined_u:object_r:user_home_t:s0 /home/matty/.bashrc
$ ps -Z
LABEL                             PID TTY          TIME CMD
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 53168 pts/0 00:00:00 bash
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 54566 pts/0 00:00:00 ps

Range

Written as: sensitivity:categories

Sensitivity:

  • single level: s0
  • range of levels: s0-s0

Categories:

  • specified individually: c0,c5,c10
  • treated as ordered set: c0.c10 (= c0,c1,c2,...,c9,c10)

I don't know what any of this actually means.

Class

Object classes (file, dir, lnk_file, etc.) and the set of permissions that can be configured on each.

See: https://selinuxproject.org/page/ObjectClassesPerms

Policy

/etc/selinux/<policy_name> – by default we seem to always use a policy called "targeted"

A policy has a bunch of 'modules' – on my RHEL7 server these can be found in /etc/selinux/targeted/active/modules/ (inside sub-directories for each priority?)

Policies are made of rules, e.g.:

allow user_t user_home_t:file { create read write unlink };

According to this rule the user_t type is allowed to create, read, write, and unlink files that have the user_home_t type.

The process of creating or updating these is interesting.

Example Policy Creation

If a particular action is failing – for example, your Apache httpd process is having trouble writing files under /var/www/ – you can auto-generate a policy to fix it, using Magic™:

$ grep httpd_t /var/log/audit/audit.log | audit2allow -M foobar
$ cat foobar.te

audit2allow generates a .te file (the source) and a .pp file (the compiled policy.) In this case the .te file includes this line:

#!!!! This avc can be allowed using the boolean 'httpd_unified'

..which tells me I could use the semanage boolean command (below) to fix it without generating a new policy. If I decided to edit the .te file by hand I could recompile it:

$ checkmodule -M -m -o foobar.mod foobar.te
$ semodule_package -o foobar.pp -m foobar.mod

A compiled .pp file can be loaded into your SELinux system:

$ semodule -i foobar.pp

Remember: "foobar" is a Bad Name™ for a module. Do not use it.

Commands

  • chcon – change a file's context
  • restorecon – resets a file's context to whatever the policy (and any semanage fcontext rules) say it should be
  • runcon – executes a command with a specified context
  • semanage
    • semanage login -l – map of Linux user ↔ SELinux user
    • semanage user -l – list of SELinux users and their roles
    • semanage fcontext -l – list of managed file contexts (see also: restorecon, and the example after this list)
    • semanage boolean -l – list of individual policy toggles (like the httpd_unified one mentioned above) that can be switched on or off
    • semanage export – shows the semanage commands needed to get back to your current configuration
  • seinfo – query components of a policy
    • seinfo -u – list of users
    • seinfo -r – list of roles
    • seinfo -t – list of types
    • seinfo -t -x – list of types with their attributes
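
Putting a few of those together – a minimal sketch with a made-up path, showing the difference between a one-off chcon and a persistent semanage fcontext rule:

$ chcon -t httpd_sys_rw_content_t /var/www/foo/uploads
$ semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/foo/uploads(/.*)?"
$ restorecon -Rv /var/www/foo/uploads

The chcon change takes effect immediately but will be undone by the next relabel; the semanage fcontext rule is what makes restorecon (and future relabels) keep the context you want.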
]]>
Fri, 17 Mar 2017 16:48:00 +1100 d23cc8e032060c884a826344b3535d4a
[QUT] HTTP/2 (2017-03-10) https://matthew.kerwin.net.au/blog/?a=20170310_http2 HTTP/2

HTTP/2 is "...a replacement for how HTTP is expressed “on the wire.”" [http2.github.io] It was invented to improve performance for the web, based on Google's experimental SPDY protocol (which it has since replaced.)

Among its stated goals were the requirements that it use the same protocol semantics (request-response exchanges, headers, status codes, etc.) and traverse the same networks (gateways, proxies, etc.) as HTTP/1.x.

Problems with HTTP/1.x

Head-of-line blocking

Because of HTTP's request-response, request-response flow, any subsequent exchange cannot progress until the preceding one has completed. This is called "head-of-line (HoL) blocking," and is a problem if:

  • the client doesn't (or can't) stack requests, thus waiting for a whole request-response round-trip before starting a new one (see also: pipelining)
  • big/slow/unimportant resources are requested before small/fast/important ones, so it appears nothing useful is happening for a while

TCP connection overhead

In the olden days, each request-response exchange happened on its own TCP connection (open(), write("GET ..."), read(...), close()). Unfortunately that open() is a fairly costly operation, especially if latency is involved, and then even once it's completed you find yourself throttled by TCP congestion control [slow start]. (It's even worse if HTTPS/TLS is involved, because the initial handshakes and key exchanges and whatnot involved there can be very slow.) For high-churn servers you also end up with a lot of TCP ports bound up in TIME_WAIT. This can be partially overcome with Keep-Alive and persistent connections or by sharding; however:

  • HoL blocking leads folk to use multiple simultaneous connections (domain sharding is the server-driven equivalent of this) resulting in multiple simultaneous slow starts (i.e. the average speed is slower over all)
  • for historical reasons HTTP is geared towards tearing down connections periodically (Keep-Alive timeouts, httpd's MaxKeepAliveRequests and – to a lesser extent – MaxRequestsPerChild settings, etc.)
  • sharding breaks caches (explained in more detail below)

HTTP overhead

Sometimes (and not uncommonly):

GET /devcon/http2-rfc7540/ HTTP/1.1
Host: xfiles.library.qut.edu.au
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.81 Safari/537.36
DNT: 1
Referer: https://xfiles.library.qut.edu.au/devcon/
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Cookie: com.silverpop.iMAWebCookie=12345678-abcde-1234-abcd-123456781234; III_EXPT_FILE=aa1122;
  III_SESSION_ID=1aca388da6270e7153258a4ea06cf7be; SESSION_LANGUAGE=eng; ezproxy=xxxxxaaaaa55555;
  _saml_idp_for_int=aHR0cHM6Ly9lc29lLXRzdC5xdXQuZWR1LmF1; _saml_idp="aHR0cHM6Ly9lc29lLnF1dC5lZHUuYXU=";
  __utmz=255014920.1432528569.4.3.utmccn=(referral)|utmcsr=xxx.qut.edu.au|utmcct=/idp/profile/SAML2/POST/SSO|utmcmd=referral;
  spepSession=319954b846de6f70ad1bff65ea9b85d23f037d68-d5f57d6e97b139fbaf0952803ec36fea5e4000c1-1432598840;
  __utma=255014920.1032807906.1432269288.1432598569.1432602655.6; __utmc=255014920; _ga=GA1.3.1037755534.1431397557
If-None-Match: "98c66e-f0c-8e1315c0"
If-Modified-Since: Tue, 26 May 2015 04:28:15 GMT
  

Total: 1200 bytes (>1k)
Required: 72 bytes

This request is repeated, almost byte-for-byte, for every image, stylesheet, font, etc. in the page.

Cache busting

The ideal cost for a HTTP request-response exchange is 0 bytes delivered in 0 seconds. Sounds crazy, but this can be achieved – using caching.

To get around all of the problems listed above it's common to see two practices:

  • domain sharding – duplicating resources across multiple domains and distributing requests between them (thus increasing the total number of parallel connections)
  • inlining/spriting – combining multiple resources into a single überresource (often accompanied by instructions for separating them out again) to reduce the amount of TCP and HTTP overhead per resource

However:

  • a resource made available on multiple shards has to be cached once for each shard, which means (worst case) it has to be requested n+1 times (where n is the number of shards) in order to get one perfect 0 byte, 0 second request.
  • any change to any sub-resource of an überresource means that the entire überresource is invalidated in all caches and needs to be refreshed – which is much more costly than updating a single sub-resource. (Especially if that sub-resource isn't actually required at the moment.)

Solutions with HTTP/2

Multiplexed streams

HTTP/2 supports parallel multiplexed request-response exchanges on a single connection. This means that HoL blocking is eliminated, and resources can be requested and delivered as soon as they're known to be needed.

Persistent connections

HTTP/2 works over a single, long-lived TCP connection. This means the costs of open() and TCP slow start are amortised over the lifetime of the connection.

Binary format & header compression

HTTP/2 uses a binary packing format (where HTTP/1.x used Good Old ASCII™) and a header compression mechanism that reduces the number of bytes sent over the wire quite a bit. (It was particularly designed with stupid headers, like User-Agent and Cookie, in mind.)

Caching works

Because these solutions eliminate the drivers for things like sharding and spriting, caches now work the way they were intended*.

* HTTPS/TLS/MitM/etc. notwithstanding – but that's a topic for another time.

Other benefits of HTTP/2

HTTP/2 also introduces some other goodies:

  • explicit stream priorities – allowing clients to add a preference/weighting to requests, as a hint to the server that some resources (e.g. render-blocking CSS) should be delivered before others (e.g. asynchronous javascript)
  • reset (aka "stop") – allowing either end of a connection to cancel an in-flight request-response exchange, so you don't have to clog the tubes waiting for an unwanted resource to be fully delivered
  • server push (aka "cache push") – allowing a server to send a resource to a client without the client asking for it first (useful for updating stale cached resources)
  • flow control – allowing either end to limit the amount of data it receives from its peer, so you don't have to worry about buffer overflow or (some classes of) DoS attacks

Side-effects of HTTP/2

As I've written elsewhere, all of these changes do mean that while HTTP/2 was meant to be a drop-in replacement for HTTP/1.1’s transport, realistically we have to rethink how our applications are structured and redesign them to take advantage of what HTTP/2 has to offer. And it's not really practical to try and offer the exact same service over both protocols (c.f.: Happy Eyeballs) unless the application is explicitly programmed that way.

Why to use HTTP/2

HTTP/2 was designed for web browsing. It's useful for:

  • web apps – a single, long-lived connection, with potentially lots of repeated metadata and resources
  • multiple tabs – ...to a single server can share a connection (taking advantage of better network usage and compression), and use prioritisation to "optimise the user experience" between the tabs
  • servers in general – ...will benefit from better network usage (fewer connections) and hopefully less bandwidth usage

However it is computationally more complex (and therefore slower to execute) than HTTP/1.x, although that's usually more than made up for in other efficiencies – being CPU bound is almost always better than being IO bound.

Finally, unless you can control both the client and the server and can assert a level of surety over every intermediate network device, there's no way to use HTTP/2 over cleartext HTTP. All the major browser vendors decided to only support HTTP/2 over HTTPS†. This adds additional costs to running HTTP/2 (the cost of certificates, administrative overheads, additional computation, etc.) If you're already using HTTPS for everything, as we often are, then it's not a problem; however if you're running sites in cleartext HTTP the path to upgrading can be quite costly.

† For reasons. The technical reason is complex:

  • If HTTP/2 is meant to be an upgrade for HTTP, then it should still work with the same URLs – and same URLs means same default ports. Which means for 99.999% of URLs on the open web we would have to carry HTTP/2 over ports :80 and :443.
  • Some proxies assume all data flowing through TCP port :80 (or any port labeled "HTTP" in their config) is HTTP/1.x, and those proxies can die in horrible and unpredictable (and sometimes undetectable or undiagnosable) ways if they instead get a stream of apparent binary guff. In most cases those devices have been convinced, over the course of the past two decades, to expect and allow a stream of binary guff on port :443 (or any port labeled "HTTPS") so "smuggling" HTTP/2 inside a :443 TLS stream has much more chance of success.

The non-technical reason is simpler: Google wants HTTPS everywhere.

Update [2017-03-14]

Regarding Problems with HTTP/1.x

There are also lower-level workarounds to help fix things like TCP connection establishment overhead, such as TCP Fast Open (and I think there's a 0-rtt TLS hack as well, but I don't know much about that.) These workarounds don't help with slow start, though, or any of the other issues listed above.

Regarding Why to use HTTP/2

Another point I forgot to bring up is that HTTP/2 over TLS (i.e. the de facto standard) requires the client and server to negotiate the HTTP version inside the TLS protocol. This requires the use of the ALPN TLS extension.

ALPN is supported by modern TLS libraries, and HTTP/2 by modern servers; however, official support is limited. For example, the versions of Apache httpd available from Red Hat don't have mod_h[ttp]2 compiled in, so you'd have to build your own httpd from source (or use an unsupported repository) to be able to use HTTP/2 with Apache.
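
If you want to check whether a particular server actually negotiates HTTP/2, here are a couple of quick probes – assuming reasonably recent openssl and curl builds, and with example.com standing in for the real host:

$ openssl s_client -connect example.com:443 -alpn h2 </dev/null 2>/dev/null | grep -i alpn
$ curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com/

The first should report the ALPN protocol the server selected (h2 if it speaks HTTP/2); the second prints the HTTP version curl ended up using.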

]]>
Fri, 10 Mar 2017 18:39:00 +1100 a894a982e61dbb009aeaf422ac58ab39
[QUT] Content-Security-Policy (2017-03-02) https://matthew.kerwin.net.au/blog/?a=20170302_content_security_policy Content-Security-Policy

Content Security Policy (CSP) is a HTTP header that tells complying browsers not to load or execute some things in a webpage. The presence of a well-formed, restrictive CSP header is taken as an indication of good site trustworthiness, according to some arbitrary metrics.

These resources provide a top-down view of CSP:

  1. Content Security Policy (CSP) [MDN] – a high-level summary, with some examples
  2. Content-Security-Policy [MDN] – a technical description of the CSP HTTP response header
  3. Content Security Policy Reference [content-security-policy.com] – a living quick reference guide of values, browser support, example configs, etc.
  4. Content Security Policy Level 2 [W3C] – the W3C work-in-progress draft for CSP v2

And also:

CSP is mostly useful if you don't trust the content of your pages; for example: if you operate a site where the content is created by unreliable clients (like a wiki or blog), or you include user-generated content (like comments.) Or if you want to get a high rank on the Mozilla observatory. My feeling is that its applicability depends on your site fitting one of two use-cases:

Fully Self-Contained Sites

(or sites with a well-defined and restricted set of external dependencies)

If you are running a site that contains all its own images, styles, and scripts – or you have a very specific set of external dependencies – it is easy to set up a site-wide CSP. In this case a blanket policy like default-src 'self' can usually apply.
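
For a site like that the whole policy can be a single static response header. A sketch for Apache – assuming mod_headers is loaded, and that the directive lives in the relevant VirtualHost or .htaccess:

Header always set Content-Security-Policy "default-src 'self'"

The equivalent add_header one-liner does the same job on nginx; either way, nothing per-page has to change.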

Sites with Dynamically Generated Content

(which can tune the response headers specifically for each page/resource)

If you have the opportunity to construct response headers individually – for example, if your pages are all dynamically generated – it can be (relatively) straight-forward to generate a CSP tailored to each page. This still requires a high level site-wide policy to be in place, but the fine details can be tuned; for example, only allowing externally-linked resources on pages that are known to require them.

Dynamically-generated resources also introduce the possibility of using nonces to whitelist certain approved resources (under CSP v2.)

In any case, take heed of this sentence in the introductions of the W3 specs: "There is often a non-trivial amount of work required to apply CSP to an existing web application." It is not a turn-key or plug-in option. To help site maintainers migrate to CSP there exists the Content-Security-Policy-Report-Only header, which is exactly the same as the CSP header except that violations will not be blocked by the browser, and will instead be reported. Any deployed CSP should include a report-uri directive, both to help detect misconfigurations and to help diagnose actual attacks.
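
As a concrete (made-up) illustration of that migration path, you might start by shipping something like the following and watching what gets reported, before flipping it over to the enforcing header – the report endpoint and CDN host here are invented for the example:

Content-Security-Policy-Report-Only: default-src 'self'; img-src 'self' https://cdn.example.org; report-uri /csp-report

Once the reports stop flagging legitimate resources, the same value can be moved to the Content-Security-Policy header proper.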

Footnotes:

  • A bit of presumptuous idealism from Mozilla: "...CSP is mandatory for all new websites..." [1]

Update [2017-03-03]

Some updated links

This introduces another cost for supporting CSP: it's changing so frequently that old versions don't even have a chance to become standardised before the next version is implemented in browsers and released. Watch for back-compat issues if you're planning on using CSP.

]]>
Thu, 02 Mar 2017 14:58:00 +1100 e2cd908ebc762f475ff91620646eb182
RFC 8089: The "file" URI Scheme (2017-02-21) https://matthew.kerwin.net.au/blog/?a=20170221_rfc8089_file_uri_scheme RFC 8089: The "file" URI Scheme

You might not know this, but I've been working on a thing. Well, finally and after many years' work it has been published as an RFC.

So, of course, I've thought of a bunch of things that I wish I'd added, or done differently.

A big one is that I wish I'd thought to split it into two files: the normative standards track spec that defines the scheme, and an informative document covering all the non-standard stuff in Appendix E—contentious things people do (and in many cases have done for decades) that could never be included in the main standard for political reasons but you probably need to be able to deal with if you want to interact on the open internet anyway.

I would totally use that as the title.

The reason for two files is that the core spec, being very stable, is probably not going to change much; but in contrast the informative bit, which documents the crazy stuff people do on the wacky internet, is liable to drift and warp and change over time. If we wanted to update the second part we'd have to re-release the entire document.

And now some politics: how do you justify pushing out a document that updates or obsoletes a standards track spec but doesn't actually change the spec? It's much easier to replace an informational memo.

I also wish I'd been able to find a way to better address Windows' quirks and UNC strings. Some of the non-normative appendix content used to be in the main spec, but somebody on the mailing list complained that I was giving too much attention to "Windoze" (presumably because 2017 will be the year of Linux on the desktop?) As a result, all the dumb quirks about dealing with drive letters and resolving relative references and ".." segments and all that, and how many slashes to put after "file:", were relegated to an appendix – and, I regret to say, in some cases completely forgotten about.

And so a lot of text that would have removed edge cases and resolved historical quirky behaviour—and made "file:" URIs really widely interoperable—is not actually standardised. I mean, it's written there, and sometimes I even tried to say "you probably really want to do this", but someone didn't like Windows so I couldn't make it really real.

I guess I could just write it in my blog. Yeah, that sounds cool. Here you go, an officially unofficial guide to using "file:" URIs by the guy who wrote the spec:

An Officially Unofficial Guide to Using "file:" URIs by the Guy Who Wrote the Spec

  • file:/foo/bar.baz and file:c:/foo/bar.baz are perfectly legitimate, unambiguous, and beautiful.
  • ... and file:/c:/foo/bar.baz is fine, too, if you prefer that aesthetic.
  • ... and file:///foo/bar.baz and file:///c:/foo/bar.baz have been working absolutely perfectly for decades, if you don't want to rock the boat.
  • file://c:/foo/bar.baz – and particularly file://c|/foo/bar.baz – are just... no. Don't do that. This isn't 1997. We have standards.
  • While we're there: don't use \. Ain't nobody got time for that.
  • file:////example.org/Qux/foo/bar.baz is obviously pointing to this file on an SMB share: \\example.org\Qux\foo\bar.baz
  • ... and file://///example.org/Qux/foo/bar.baz is acceptable, if a bit... y'know... slashy.
  • ... and if you don't speak SMB, no one is forcing you to implement it. Just recognise that that's what the link means.
  • If you're in Windows and you're in a HTML document at file:///d:/foo/bar/baz.htm and you see a reference like <img src="/foo/bar/pong.png"> you know it should resolve to file:///d:/foo/bar/pong.png – even if your current directory is in C:\ somewhere.
  • ... and you know that <a href="/f:/oof/rab/zab.htm"> resolves to file:///f:/oof/rab/zab.htm
  • ... and anyone writing <a href="/a:foo/bar.baz"> or <link href="/e:../bar.baz"> is not trying to interoperate – they're looking for exploits. Don't fall for it.
  • Anything you write between file:// and the next / is confused and broken and there'll always be someone who gets it wrong, so just don't write anything in there.
  • This reference <a href="/%E3%81%A1"> may mean many things to many people. (/ち in UTF-8, /πüí in CP-437, /TA~ in EBCDIC, etc.) Just avoid the whole mess – use an IRI.
  • ... and if you want a counter-example, this UTF-8 IRI: file:c:/reçu.txt always means exactly that, even if it gets turned into 0043 003a 005c 0072 0065 00e7 0075 002e 0074 0078 0074 in NTFS's UTF-16 encoding, or 43 3a 5c 72 65 87 75 2e 74 78 74 in MS-DOS's CP-437.
  • This reference: <a href="~matty/.plan"> doesn't mean what it does in bash, and you know it doesn't.
  • ... same with $HOME and %SystemRoot% and all that sort of guff.

Abide by these guidelines and, while not necessarily adhering to the strictest interpretation of a Standards Track RFC, at the least you'll be a well-intentioned and interoperable member of the internet community.

]]>
Tue, 21 Feb 2017 16:17:45 +1100 d59b30cbded5f72f571a22543ac4196a
Using HTTP/2 (2016-11-10) https://matthew.kerwin.net.au/blog/?a=20161110_using_http2 Using HTTP/2

HTTP/2 changes the way HTTP traffic flows over the web.

It changes how TCP connections are established and maintained, how requests and responses are correlated, and how metadata and payload bytes are encapsulated.

One of the driving ideals was that the semantics of HTTP (which have remained essentially unmodified for more than 25 years) would not change. Requests and responses would work the same way and carry the same information; headers and status codes would keep the same meanings; etc.

Ideally this would mean the HTTP/2 specification would update and/or replace the HTTP/1.1 message syntax and routing specification (RFC 7230) but not have any impact on semantics and content (RFC 7231, 7232, 7233, 7234, 7235, etc.) From a web application’s point of view it should make no difference whether a particular request-response transaction is transported over HTTP/1.1 or HTTP/2.

However, as with all things, there are exceptions.

Inherent Changes

By changing network behaviour, HTTP/2 changes the way HTTP messages should be formulated and delivered.

HTTP/2 makes better use of the TCP/IP protocol – it uses a single, long-lived connection, so connect times and slow-startup and Nagling and the like are all but solved; and it supports interleaved messages on the one TCP connection, so head-of-line blocking is gone – so as such several application-level strategies and hacks are no longer required. In fact, they may even be detrimental to efficient use of HTTP/2.

  • With no connection costs or H-o-L blocking, concatenation tricks like spriting and inlining are made redundant. In fact, because cached subresources may become stale at different times it may even be more efficient to separate the subresources, to let them carry their own freshness metadata.

  • By eliminating H-o-L blocking, resource ordering at the application level will benefit from a new strategy.

  • Interleaved messages mean that application-level bandwidth hacks, like parallel TCP connections and sharding, are no longer desirable.

While not a requirement for running a HTTP/2 server, these changes do mean that applications should be reconfigured or rewritten to take advantage of (or not be worse off for) running over HTTP/2 transport. It also suggests that the same application should probably not be run on both HTTP/1.1 and HTTP/2 transport stacks – not if you want it to run well on both.

Explicit Changes

HTTP/2 also introduces a number of knobs and dials meant to be twiddled by the application – not just server-wide settings, but dials that tune the way pages, messages, even individual headers are transmitted.

Priority

HTTP/2 allows resources to be prioritised relative to each other within a single session. For example, a browser can suggest to the server that javascript and CSS resources should be delivered before images. The decision to prioritise resources comes from the application, but the machinery that communicates those decisions is nested inside the framing structure of HTTP/2. To make use of resource prioritisation the application has to know that it’s being carried over HTTP/2, and be able to twiddle the “priority” knobs (or read the received values) in the HTTP/2 transport machinery.

As these priorities are just hints there is no requirement on either application to support them, but to be useful they have to be understood by both applications.

Server Push

A much bigger deal, HTTP/2 introduces the ability for a server to send a response to a client without first receiving a request for it. This means, for example, that a server can notify a cache (on the network or inside a browser) that a resource has been updated – and send the updated version – before the browser tries to (re)load it.

This invokes clearly new semantics, and replaces existing application hacks like long-polling. It requires a server application to decide when a resource should be pushed through the HTTP/2 transport machinery, and it requires the client application to know how to deal with the resulting pushed resource.

Server push can be automatically disabled by the HTTP/2 transport machinery (even though it defaults to “enabled”) so the transport can act as the application’s advocate in saying that it is not supported, but the transport machinery needs to offer the application an “on-off” toggle as well as the interfaces necessary for pushing/receiving resources if server push is to be used.
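
As one concrete example of such an interface: I believe some server implementations (Apache's mod_http2, for one) let the application trigger a push simply by naming the sub-resource in a Link response header on the parent resource, roughly like this (the path is made up):

Link: </css/site.css>; rel=preload; as=style

The transport machinery spots the header, pushes the named resource if the client hasn't disabled push, and the application never has to touch a frame directly.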

Unsafe Headers

One transparent improvement HTTP/2 adds is the ability to remember and “replay” headers from request to request. This means oft-repeated and bulky headers (for example ‘user-agent’ and ‘cookie’) only need to be sent in full once per session (until they change.) It is a powerful and efficient compression strategy.

The header compression specification (RFC 7541) discusses potential security vulnerabilities this strategy might introduce; to help reduce the risks it allows for specific headers to be transmitted uncompressed. However it’s the application that knows what headers might be at risk, and which are safe to compress; so the HTTP/2 transport machinery needs to provide dials to let the application tell it which headers to/not to compress.

There are other proposals in the works as well, like cache digests and compression dictionaries, which transport application-level information about cache state and content-types using transport-level machinery.

So while HTTP/2 was meant to be a drop-in replacement for HTTP/1.1’s transport, realistically we have to rethink how our applications are structured and redesign them to take advantage of what HTTP/2 has to offer.

]]>
Thu, 10 Nov 2016 11:16:06 +1100 ffda989a4dc3c4f618c0a909a23ef3be
HTTP/2 Gzipped Data (2016-06-01) https://matthew.kerwin.net.au/blog/?a=20160601_http2_gzipped_data HTTP/2 Gzipped Data

For two years now I've been working on an extension for HTTP/2 that introduces a mechanism for applying gzip encoding to data transported between two endpoints in the Hypertext Transfer Protocol Version 2 (HTTP/2), analogous to Transfer-Encoding in HTTP/1.1. [1] It's gone through a few pretty serious revisions in its relatively long life, but I'm pretty happy with where it is and how it reads right now – I think it's about ready for publication as an RFC. However I don't know if it ever will be.

HTTP/2 is, for all intents and purposes, HTTPS-only. Even ignoring political drivers like the https-everywhere movement, HTTP/2 is HTTPS-only on the open web. Mostly this is because HTTP/1.x has been around for a looong time, and some middleboxes out there have pretty questionable code paths. One of the big assumptions has been that all traffic on the web would look just like HTTP/1.0. And for some twenty-odd years that assumption has held. Various proxies and gateways have made this assumption, in some cases running successfully for decades (even since HTTP/0.9) and freely peeked at or modified any HTTP traffic that passed through them. However HTTP/2 is a big break; for the first time HTTP traffic on the web doesn't all look like HTTP/1.0 – it's now packed in some incomprehensible binary formats and uses built-in compression. Any of those old middleboxes that tries to read the stream of data would at best be confused, at worst crash (bringing down internet access for some number of people – which is not what we want a new HTTP protocol to do.) And any of those old middleboxes that modifies data could royally screw things up, by whacking unpackaged bytes in willy-nilly. The traditional way to foil those invasive boxes was to ship the traffic on a different TCP port. HTTPS, while also using SSL/TLS to encrypt all traffic (including metadata), runs by default on port 443, where plain HTTP runs on port 80. In this way the traffic side-steps the naive old middleboxes by completely avoiding their ports, and anyone listening on port 443 knows that all they'll see is a garble of binary guff (so there's no point trying to read or modify it.)

One of the big goals of HTTP/2 was to make the web better for a lot of people, invisibly. This means all the improvements you get from the binary packing and compression should continue to work on existing sites with existing URLs. By continuing to use http:// and https:// URLs, we're also committed to using TCP ports 80 and 443. And since those old meddling middleboxes are still out there, screwing up port 80 traffic for everyone, port 443 (∴TLS, ∴HTTPS) remains the only viable option for carrying HTTP/2 traffic on the open web. 😔

This doesn't mean that HTTP/2 can't be carried over a cleartext port-80 channel, just that it might not work in the big dark cloud, and none of the major browsers will bother trying.

Compression can break encryption. There's a fair bit on this out in the web, especially if you search for the "BEAST" or "CRIME/BREACH" attacks, so I won't delve into it myself. The HTTP/2 spec is pretty clear on its position regarding compression of data within an encrypted channel:

   Implementations communicating on a secure channel MUST NOT compress
   content that includes both confidential and attacker-controlled data
   unless separate compression dictionaries are used for each source of
   data.  Compression MUST NOT be used if the source of data cannot be
   reliably determined.  Generic stream compression, such as that
   provided by TLS, MUST NOT be used with HTTP/2 (see Section 9.2). [2]

That last sentence, and people's general attitudes towards compression since BREACH, are what give my draft troubles. Some could argue that I'm trying to provide "generic stream compression" which is expressly forbidden; however the way the paragraph reads – and the fact that the referenced Section 9.2 is all about TLS compression – suggests to me that it's "generic TLS stream compression" that is forbidden, and that the proscription doesn't apply to cleartext HTTP traffic. The absolutist language in the spec is possibly a hangover from an earlier draft, when cleartext wasn't to be supported at all.

Early versions of SPDY (from which HTTP/2 is derived) and early drafts of the HTTP/2 spec included a "COMPRESSED" flag on DATA frames – very similar to what I'm reintroducing with my draft (but more vulnerable through its retained/reused compression state between frames) – which was yanked after BREACH. [3] That's a pretty powerful stigma to overcome.

On top of that, because the major browsers won't speak HTTP/2 without HTTPS, and since gzip compression inside a TLS tunnel is a Bad Thing™, I've lost a lot of potential implementors/supporters for my draft, and, worse, probably gained some detractors. That said, this is a feature that is queried or requested from time to time in the community ([4], [5], [6], [7], [8]), so I still retain some hope.

[1]
http://phluid61.github.io/internet-drafts/http2-encoded-data/
[2]
http://tools.ietf.org/html/rfc7540#section-10.6
[3]
https://github.com/http2/http2-spec/commit/d5a8faeaa605bdff7b96287d72913b4b742104cf
[4]
https://lists.w3.org/Archives/Public/ietf-http-wg/2014JanMar/1179.html – the post that started this all
[5]
https://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/0207.html
[6]
https://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/1489.html
[7]
https://bugzilla.mozilla.org/show_bug.cgi?id=68517#c22
[8]
https://lists.w3.org/Archives/Public/ietf-http-wg/2016AprJun/0293.html
]]>
Wed, 01 Jun 2016 23:48:51 +1000 a394cd3b4340992b51862f5edd462134
Ruby Gotchas: parsing modifiers (2016-03-03) https://matthew.kerwin.net.au/blog/?a=20160303_ruby_parsing_modifiers Ruby Gotchas: parsing modifiers

Have you ever come across this Ruby quirk?

foo if foo=1

You might expect it to return 1, but what it actually does is fail with the message NameError: undefined local variable or method `foo' for main:Object

There is a reason, and it makes a certain kind of sense. Allow me to explain.

To start: Ruby reads scripts from left-to-right, top-to-bottom.

Consider the interpreter. When it sees a word like foo it has to decide whether it’s a local variable or a function call (it already knows it’s not a keyword, constant, instance variable, global variable, etc. because of syntax rules.) If we’re in a method, the initial set of local variables are the method parameters; otherwise it’s empty. Thereafter, local variables are added whenever the interpreter sees an assignment (e.g. foo = 1)

Now back to the code: let’s step through an approximation of how the Ruby interpreter sees it.

  1.   parsed: []
      vars:   []
      code:   foo if foo=1
      cursor:^
    

    Since vars is empty, foo can’t be a variable, therefore it must be a function.

  2.   parsed: [function('foo')]
      vars:   []
      code:   foo if foo=1
      cursor:    ^

    if is a keyword, which is allowed as a modifier after a function call.

  3.   parsed: [function('foo'), modifier('if',...)]
      vars:   []
      code:   foo if foo=1
      cursor:       ^

    foo= is an assignment, so we can add foo to the list of variables.

  4.   parsed: [function('foo'), modifier('if',assign('foo',...))]
      vars:   ['foo']
      code:   foo if foo=1
      cursor:           ^

    The right-hand side of the assignment is an integer literal.

  5.   parsed: [function('foo'), modifier('if',assign('foo',1))]
      vars:   ['foo']
      code:   foo if foo=1
      cursor:             ^

    We’ve hit the end of the input; we’re done.

If I were to convert that ‘parsed’ input into an unambiguous canonical form, according to standard precedence and actual execution order, it might look something like this:

tmp = (foo = 1)
if tmp
  foo()
end

We can verify that the interpreter is reading that foo as a function by actually creating said function, and demonstrating that it’s being executed:

def foo()
  :bar
end

foo if foo=1
#=> :bar

To “fix” it we have to tell the parser it’s a variable, by assigning beforehand. We could do this the ugly way:

foo = nil
foo if foo=1

...but a better way would be to be more explicit. Assignment in a condition is a bit dodgy (and Ruby even spits out a warning saying “found = in conditional, should be ==” – a strong hint that this is something to avoid), and in this case in particular, the intention of the line is a bit unclear. We can simultaneously fix it and make it more prosaic, without adding too much verbosity, thus:

foo=1
foo if foo

As a rule of thumb, only use simple predicates in modifiers.

  • Ruby reads left-to-right, so you should write your execution left-to-right. (The code at the end of the line shouldn't execute before the code at the start*.)
  • It’s hard to understand complex conditions, and we rarely expect them to have side-effects.
  • It’s not that bad to capture the result of a complex operation in a temporary variable and test that.
* except for function arguments, of course]]>
Thu, 03 Mar 2016 11:42:00 +1100 f548f159c82a045415df7d3a4fb1f143
Vlogging (2014-10-14) https://matthew.kerwin.net.au/blog/?a=20141014_vlogging Vlogging

Hi everyone, I'm Matty. Welcome back to my blog.

First up, my apologies. Today I am home from work sick. I had a pretty rough night last night, didn't get much sleep, still very far from 100%, so I thought, wouldn't this be a perfect time to make a recording of my current state and present it to the entire world to be recorded for all time?

My last post (if you didn't catch it, you should, it's great!) was about my To Do list, of things I wanted to be working on. Unfortunately since that post I have done very little from the list. The reason being: I finished up my two weeks' leave and went back to work. Turns out work takes up a lot of my programming mana. But I have remained cognizant of my list of projects, and of the fact that they're not being worked on. So to that end I have begun to sketch out a diabolical plan. What I need, I have decided, is a strong routine. One that includes time especially devoted to my projects.

Later this month I will celebrate my 34th birthday. I've set myself the task of keeping a vlog recording an entry every day of my 35th year. No particular reason, except that it seems like it's something I could do, and that I could enjoy. Mainly, though, it's because it gives me an excuse to set up a routine.

I'm going to draft up a schedule so I can make sure I have enough hours in my weeks to do the things I want to do while still being there for my family – because, of course, that's my number one priority.

Actually I'm going to do up two or three schedules so I can work around my lovely wife's work roster, switching between day shift and night shift versions as appropriate.

I'm also hoping to recruit my family's help because, of course, I'm not going to do any of this by myself, and not in isolation. I may even have the odd guest star from time to time throughout the year.

And so onto this post which you may have noticed is in video form. This is my test run, to get some experience learning how to video, and to get some more practice on the YouTubes. I've decided, for this one at least, to use a real camera. I may switch back to webcams throughout the year I don't know. I've also discovered through much, much trial and error that I really, really do need a script. I have sticky-taped it to the bottom of the camera because I'm professional. Maybe my ad-lib powers will develop with practice but as of right now they're pretty crap so script it is. Also makes it easier to do the transcript.

So yeah, watch this space. There might probably be many, many more videos to come. If there's something about which you'd like to hear me talk or if you have some praise (or criticism) please leave a comment in any of the places in which I will be linking this video. And yeah you'll see me next time. I'm Matty and this is my vlog.

]]>
Tue, 14 Oct 2014 19:50:00 +1100 92f1dd3e747f64e9a8f1c68bf8003c30
Projects (2014-09-22) https://matthew.kerwin.net.au/blog/?a=20140922_projects Projects

Hi everyone, I'm Matty. I've recently learned that there's this thing now where you can play audio on your computer, and not just that, you can actually play it from the internet. I thought this sounded intriguing, so I've decided to see if it's possible to record my own audio, and have that be played on peoples' computers, from the internet.

I don't entirely trust it, so I will also provide a traditional written transcript on my blog. If you are hearing this somewhere on the internet other than my blog, the blog itself is matthew.kerwin.net.au/blog

So, what am I here to say? I thought I'd give a bit of a run-down on the projects here on my To Do list, where they're at and where I'd like to take them.

First up is the IETF stuff. If you've seen my blog you might have noticed that I'm interested in development of the HTTP/2 spec; that is the proposed protocol that will hopefully revolutionise the way our computers talk to the interwebs, the idea being to make it better in such a way that nobody even notices. Because who ever actually notices things getting better? I've had some pretty rewarding discussions and debates with various folk in the working group, occasionally vented my spleen a little bit here on the blog. The working group mailing list has mostly quietened down at the moment, apart from an ongoing debate over whether or not HTTP/2 should list its requirements for algorithms in TLS or something. I don't really know, because I was never planning on doing encrypty stuff anyway, because I don't care. I think the quietness, aside from that, is because people are working on polishing implementations. So to that end, my project list includes implementations. For myself, I have a rudimentary C library that I'm slowly working on, which is intended to provide HPACK which is the header compression magic stuff in HTTP/2 -- it currently does Huffman encoding, and I'm working on doing all the rest, like delta encoding, all that. Should be fun. That one's a bit on the back-burner at the moment, because I've also decided to take on a more challenging -- and, probably, more likely to actually finish -- project, of updating Ilya Grigorik's Ruby http-2 library gem thing. I've been plugging away, making incremental pull requests, and I even got rake and rspec working on my home PC. I'll link to the github repo in the transcript [http-2], if you're interested in having a look.

So, that's a thing I'm doing, and it's moving along at its own pace.

Still on the IETF and back to the HTTP/2 spec itself, I currently have another little side project underway, to draft an extension (or two) to add optional compression to the HTTP/2 protocol. I'm not even going to start on why, but to say Content-Encoding? Really? Yeah, again I'll put some links on the transcript [internet drafts] if you want to have a look at that.

Away from the IETF and HTTP now, if you've seen the blog before you might also have noticed that I like video games. Sadly, I'm not much of a modern gamer per se, because buying a current gen console, or even a previous gen console, or a PC with DX10 or 11 is still quite a little bit beyond my acceptable spending range. Yes I'm still running DirectX 9.0c in WindowsXP on a 32-bit machine. Because I hate my life. I'm right into the community, and I follow the new releases, and watch heaps of Let's Plays and reviews and all of that... I just don't get to actually play the games myself. As a brief diversion, the other night (actually quite a while back now that I think about it) my wife and I were watching Good Game on ABC2, and they were interviewing a bunch of games industry people for something, and up popped Zorine Te. And I went, "huh, that's Zorine, what's she doing on Good Game? She works at gamespot." And then the subtitle popped up saying "Zorine Te, Community Manager - Gamespot AU". And my wife looked at me with a bit of wonder and surprise, and said, "Most of the real serious gamers I know play a lot of video games, I didn't realise that you were so into it. I knew you talked about liking games and all that, but I didn't realise how into it you were." I just looked at her and said, "We can't afford it."

So, yeah, that's my anecdote... but back to the projects. Not the New York City Housing Authority projects; I mean my list of things that I'm working on, or not.. you know what I mean.

(Sorry.)

I have decided to finally download Unity3D, and see if I can get back into development, or let us say, just get into development. Back at university, in the old days, (oh God, I started university when the year started with a 1)... back in the olden days I used to program OpenGL demos in C. Yep, I made spinning boxes, and... stuff. I dreamt up awesome AI strategies, some of which even made it to the blog, and played with things like overlays and shaders and other hot new tech, which was hot new tech at the time. Unfortunately since then the industry has moved on, and I have actually, if anything, moved backwards. I haven't touched interactive graphical programming for like eight years or so now. Wow. Also I never bothered learning DirectX; because, well, OpenGL was good enough for John Carmack... So now today I wouldn't even know where to begin. So a platform like Unity is perfect for me; I can leave the fancy new stuff to them as knows better, and I can focus on the things that I want to get right, which is the core gameplay. The first thing I want to build is a game for my kids. Why not? Based on one I vaguely recall from my own childhood; at school we used to have an old green-screen Apple ][ or something, and the game was called "Farmer Jane's Ponds" or something like that. Basically, there was a frog pond with lily-pads in a fixed repeating pattern; you had to press the arrow keys to program a path for the frog, and then set it on its way, and see how far it got before it fell off and drowned. Because frogs can't swim on Apple ][ games, apparently. Anyway, I want to make that game for the girls. Whether or not it's a true representation of the one from however many years ago that was, I don't care; I want to make a game that I think will be interesting, hopefully the girls will like it, hopefully I will actually build it. Unity may have just completed downloading -- it has, Unity has just finished downloading, so that project hasn't gotten particularly far yet, but it's on my list.

There are other things that are much more trivial, minor projects that aren't really worth talking about; also things that are so grandiose and astounding in scope that they will never get anywhere, so I'm just ignoring those ones. But the very, very next thing I'm going to do is wrap up this recording, and work out how to make it get into the internet. Then I'll type up the transcript, and publish it. This may not be the last time you hear my lovely voice and accent, if I happen to feel enthused in future.

Until such a time, goodbye, I guess. My name's Matty, and this is my blog.

]]>
Mon, 22 Sep 2014 16:00:50 +1000 e02afaea379c54c4ffa5127d1bf90d14
Eggshells (2014-05-30) https://matthew.kerwin.net.au/blog/?a=20140530_eggshells Eggshells

There are two game reviewers who I consider among my favourites at the moment – I connect with their intellectual and artistically aware approach to analysing and describing the gaming experience. And it turns out one is openly gay and the other is transgender. I didn’t know or realise at first (yes, I thought one had an unusual voice; should I have had any reason to read further into it than that?) I don’t know that knowing has changed my opinion of either of them or, more importantly, their reviews, but the exposure to “something different,” plus that #YesAllWomen thing, plus a bunch of other stuff lately rippling through my circles, has made me much more acutely aware of my own thoughts and actions.

Before I go on, I want to stress that I don’t want this to be a “poor me for being privileged” post. I know I’m privileged (and I’m extremely grateful, in a selfish way.) What I don’t know is whether or not I’m ignorantly misogynist. If I am, the first step has to be curing the ignorance – so please, call me out.

That said, my recent awareness has made me... not afraid, definitely not that, but nervous. It’s a tired old metaphor, but I feel like I’m walking on eggshells. Traditionally I’d have been conditioned to completely ignore those shells, that the occasional (or frequent, or ubiquitous) crunching sound was an expected part of walking around. Maybe even something to be celebrated. It would have been ludicrous to consider that someone might be upset by, let alone complain about, them breaking. But it turns out that, to push the metaphor further, I like these eggshells. I believe that they have a value in and of themselves, and that it improves the world to have them around. Maybe the complaining was what made me look down in the first place and notice them, but having done so, I find that I like them. I don’t want to tread lightly just to quiet the complaints; I genuinely don’t want to crack the shells.

But.

But I don’t know about these shells. I don’t know when to step around them, or when it’s safe to walk over, and then I don’t know how lightly to tread. Worse, I may not realise some are even eggs until I’ve already crushed them – and I’m afraid there could be some that I don’t realise even then.

As I said before, I don’t want anyone’s pity. That would be absurd. After all, it’s a self-imposed condition. It would be so easy to say, “Screw the eggshells” and grind them all into the ground. Sure, it would give me pangs – but hell, I’ve got centuries of precedent to back me up, and I certainly wouldn’t feel nervous about them any more. I realise that sounds like a threat, and I wish it didn’t, but honestly, it’s there, and it’s a real factor that has to be taken into account in the whole issue. Fortunately I’m just one little man, so my backsliding wouldn’t account for much (at least, not for those who don’t know me personally) – I’d still prefer to avoid it, though. So no, I don’t want any pity, but I do want people to know how I feel, and to appreciate that I’m making an effort; but, more importantly, I want people to help me learn about the eggshells: show me how to recognise and avoid them, and definitely call me out when I break them. It might turn out that I don’t actually care about all of the shells, and go on breaking some, but I’d like not to be counted among the worst.

Update 2019-09-08: a part at the end of this post has bugged me for five years. I never managed to make it say what I wanted it to, so I've deleted it.

]]>
Fri, 30 May 2014 21:25:14 +1000 d811345ef849d0333e5e70769d91111b
Role Playing with Kids (2014-02-22) https://matthew.kerwin.net.au/blog/?a=20140222_role_playing_with_kids Role Playing with Kids

After playing my first D&D Pathfinder session with the new group a couple of weeks ago, I finally got sick of my old dice set constantly killing me (they didn't kill me this time, but they managed to fail every skill check of the day) so I bought a pretty new purple Chessex set.

Merry, our 5 year old, saw the box sitting there and decided she wanted some. She tidied her room, twice in one week, so she could get some pocket money, and today we caught a train into the city specifically so we could go to Mind Games and buy her her own box of dice. She got a translucent purple set with white numerals[*].

Then, when we got home, both girls forced me (forced me, I say) to invent, on the spot, a simplified version of D&D and an adventure. A couple of pages of 1" grid paper, half a dozen Lego minifigs, and some quick guestimation later, we're rolling "the big die" to try and beat people's "shield number", so we can roll the "other die" to take away some of their "health scores." And we're describing our actions, and taking turns deciding on strategies, and listening at doors, and exploring behind waterfalls, and rescuing imprisoned NPCs. It was a blast.

The first time Merry came across two monsters in a room, she stepped right between them and just said, "come and hit me." End of turn. She had no idea, but I gushed with pride. I was instantly invoking taunt- and challenge-style mechanics in my mind. It was impressive to see someone with no gaming experience immediately playing the part of a tank. Incidentally, she was the one with a sword, helmet and shield, while her sister had the longbow gun. They faced the monsters down, tank in the doorway, ranged attacker shooting from behind. At the end of the encounter Merry intentionally disarmed the last monster, then they captured him and were leading him back, possibly to kill him in more comfortable surroundings ("I want to stab him in the heart", "No, let's make him sit in the pot of boiling water!") so he escaped. They were shocked that I'd let him get away, but dude, seriously...

At one point Bree decided she wanted to shoot two baddies who were standing in a straight line, so I gave her the shot (I treated it like a 4e daily, although I hadn't planned for such a thing to exist in this system), and she made it! She was so happy to have come up with a cool strategy, and then have it pay off.

In a later encounter, when a monster started attacking her sister, Merry stepped square into the middle of the fray and took a big swing at all the monsters around her. She missed the one who'd attacked her sister, but I figured that was in itself a pretty impressive manoeuvre so I had him break off his attack.

They fought their way down the cave system, discovered the throne room, eventually defeated the evil wizard and his minions (even in spite of his awesome "take 5 from your shield number" debuff power), looted corpses for gems and a key, unlocked a cell and freed an NPC prisoner, and set themselves up with a new secret lair behind an underground waterfall.

I think they had a good time. I certainly did.

]]>
Sat, 22 Feb 2014 20:04:10 +1100 c995d01ba72d455bb9f3a9cc04b3f48b
The Only Way is Sideways (2014-01-13) https://matthew.kerwin.net.au/blog/?a=20140113_the_only_way_is_sideways The Only Way is Sideways

Warning: lots of "me" and "I" in this post. It's about me, and where I am, and where I'm going. Hopefully it still works if you change all the "me"s to.. er.. "me." I mean, apply it to yourself. You know what I mean. Or what you mean.

I am at the end of the tail of Generation X; I was in the fourth grade when the Simpsons hit our TV; when I started high school we had Nirvana and no world wide web, and when I finished it was the reverse. So now I'm in my early thirties (I'm even in my 0x20s) and I've been programming in various guises since before the turn of the millennium. For the past decade and a half I've been called a "Head Programmer" and a "Lead Developer" and a "Senior Software Engineer" and even a "Senior Web Developer." I'm at the top of my profession.

The problem is: I'm in my early thirties, and I'm at the top of my professions. And I'm not exceptional; the top here isn't very high. My current employer is quite happy to pay for me to attend training courses and gain further qualifications and progress my career, but the only courses available are things that lead towards management. I'm currently eyeing off a Cert IV in Project Management, which is relevant for a senior developer who does project work (i.e. me,) but that's – after my rudimentary ITIL qualifications – kind of it. My next professional step is up through team leader/supervisor, to section manager, and on through the pointy-haired ranks. I don't really want to be pointy-haired. I want to be a coder!

This reminds me of an anecdote about my supervisor while doing my honours thesis (yep, I even did that – Bachelor of Science (Computer Science) with Honours.) I can only remember a couple of things my supervisor told me the whole year I was "working under" him: once he told me I was going to be a bad father; and another time, when he saw that my work was focused on implementation and didn't contain enough theory for him, he said that I would "only be a programmer" (the emphasis is mine – he said it dismissively.) You know what? I like being a programmer; there's no "only" about it. I saw his code, it was, to put it bluntly, shit. What is highfalutin academic computer science research without programmers? Where do all your fancy algorithms go? Who implements your esoteric ideas so they actually have some real world value? Computer scientists work for programmers, not the other way around!

Sorry, I'm digressing, but the sentiment remains. Why are programmers less? Why does programming have to stop here? Why do I need to touch less code as I move further up the ranks? Actually, that's a misphrasing, because it's not the code I like, per se; I'm equally happy to design structures and implement patterns and solve problems on a white board or in a discussion or in my scribble book or on a keyboard, which is why team leadership isn't so bad in and of itself, except that it feels like the first step on that long journey away from getting things done, towards middle management.

The options before me seem to be:

  1. do nothing – stay here where I am, slinging the same code for the same pay, and be happy with my work but maybe not so happy with my life (especially as my kids hurtle inexorably towards their teens, and all the damned expenses that accompany teenage girls...)
  2. grow pointy hair – shuffle diagonally upwards, moving further from where I want to be at work, but at least keeping the payscale in line with where I'd like my lifestyle to be; or
  3. start up – most of my peers who've made it seem to all have founded startups or be working for startups or upping starts, or things of that bent. I might be happy to go this way if I had an idea that I felt was workable, but it's no good saying "I want to start a startup, what's an idea I could chase?"

Are there any other options? If you're a programmer type, where are you and where do you see yourself going? If you're not, but your profession has a similar hard ceiling, let me know; I'd like to hear about it. Even if you just have some random advice or commentary, drop a line and let us read it. Add to the Disqus discussion below, or comment on Google+, or even @tweet me. Whatever.

]]>
Mon, 13 Jan 2014 12:48:52 +1100 b42578742faf2ca2cfaaeba898385dc5
2014 (2014-01-07) https://matthew.kerwin.net.au/blog/?a=20140107_2014 2014

Christmas is over, the new year is wearing big-boy undies, and after today our household will be back to the regular number of human inhabitants. So I'm going to resume blogging. The problem is I've been out of the loop for a couple of weeks, so I don't know what to write about.

One of my standout gifts this year was a printout of an email with details of my three day passes to PAX Aus 2014! I had a genuine emotional response when I opened it up, which hasn't happened to me for many, many years (if it ever happened at all.) Thanks Shell 😄 It feels a bit weird having tickets to an event when the schedule hasn't even been announced. Nevertheless I'm enthused. And it's the week after my birthday, so Melbourne should be pretty hospitable.

I should build a Space Marine power armour suit.

In other news, not too long after I moved down here I signed up to meetup.com to see if I could find a gaming group. The most interesting meetup I found was someone basically sending out a call to anyone who'd be interested in a 3/3.5ed campaign; I signed up, and didn't hear much more. However in the past couple of days two others have expressed an interest as well, and four means a party of three, which is bare minimum, so it looks like something could start happening on that front soon! We might end up playing Pathfinder. Hopefully could be fun.

]]>
Tue, 07 Jan 2014 14:07:46 +1100 228bcc421856a1105ddaa1c3a0e2255e
The 'file' URI Scheme (2013-12-17) https://matthew.kerwin.net.au/blog/?a=20131216_file_uri_scheme The 'file' URI Scheme

About six months ago I first created a (rubbish) Internet Draft – in the naive hope that it would be relatively easy to usher through the IETF process and be published as an RFC – to revive the 'file' URI scheme. I did it partly because it came up on the ruby core bug tracker that since RFC 1738 was made obsolete* there is no current spec that defines 'file' URIs†, and partly because I really just want to see my name on an RFC.

* Paul Jones (the web-finger guy) made a pretty good summary of his interpretation of the obsoletion, and I agree, but in RFC land "Obsolete" means "Obsolete", and there's not much getting around it.

† There's RFC 1630 which is Informational (i.e. not a standard).
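
For anyone who hasn't stared at these things lately, here's a rough sketch (in Ruby, with made-up hosts and paths) of the sorts of URIs the draft is trying to pin down; how strictly each shape gets interpreted varies between implementations, which is rather the point of writing it all down:

    require 'uri'

    # Three of the 'file' URI shapes you run into in the wild:
    [
      'file:///home/matty/notes.txt',          # empty authority, Unix-style path
      'file://localhost/home/matty/notes.txt', # explicit "localhost" authority
      'file:///c:/WINDOWS/system.ini',         # Windows drive letter in the path
    ].each do |str|
      uri = URI.parse(str)
      puts "#{str} => host=#{uri.host.inspect} path=#{uri.path.inspect}"
    end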

I sent a message to the IETF Apps WG, and there was a bit of discussion (including Paul Hoffman telling me that he gave up on the original effort eight years prior, but "Good luck, seriously.") and it was mostly positive and constructive, but still a bit hard. Well, maybe more daunting than hard. So I put my head down and read a lot more of the history of the scheme, and the vagaries of the various implementations, not to mention the way RFCs tend to be written, and the IETF publishing process in general. And I kept at it for about six months.

Actually it's just shy of six months; I just got an email from the IETF Secretariat informing me that version 00 of the draft will expire in 5 days 19 hours, on the 22nd.

Anyway, when I published version 09 the other day I sent an email to the W3C's URI Interest Group mailing list, and the response was really heartening. A lot more people had a lot more things to say, offering plenty of critiques and suggestions, even suggesting editorial improvements. I figure, if they're copy-editing my words then the content must be pretty alright. It's even sparked a couple of tiny discussions, about whether I should instruct the reader to "note" things, and whether Mac OSX file systems use UTR15 NFD.

I've added most of the comments and suggestions to the issue tracker on github, and hopefully version 10 (or possibly 11) will resolve them all, and I'll be able to take it back to the IETF with some confidence that it will move forward, more or less, towards consensus and maybe even eventual publication.

I'm feeling enthused! I want everyone to read it, and point out ways to improve it (the easier the better; pull requests will be appreciated), and help me keep it going and maintain this enthusiasm.

Together we can provide a spec that codifies what everyone's been doing for the past decade anyway, and I can get my name on a published RFC!

Incidentally, I almost wrote 'Osbolete' means 'Obsolete', which would have been amusing.


In unrelated news...

For those playing along at home, on the gaming scene, I've started playing Terraria. I've had it in my Steam library for quite a while, and tried it out very briefly once or twice. Mostly it was a freaky Zork sim where it kept getting dark and things would eat me.

However the other day I saw Nerd³'s video about Starbound, and he kept saying how it was rather similar to Terraria, but he kept doing strange things like opening menus and crafting stuff, and I realised that there's probably a bit more to the old game than I realised. So I fired it up and fumbled about until I worked out that Escape brings up a mega inventory menu thingy, and you can actually make stuff, and stuff. And on my second world I managed to trap that starting NPC guy in a hole in the ground, and build a house around him, and he hasn't been eaten by any slimes yet! And he's the one who told me I could use two lenses to make a pair of goggles (who'da thunk?) And there's also a merchant fellow who refuses to pay me for my clods of dirt. And I made a forge and a sawmill and a loom, and I'm going to unlock all the things! and power myself up and work out the deal with these "bosses" I keep hearing about, and... yeah, it's quite fun.

Steam tells me I've played about 10½ hours in the past week. And it's Monday. I guess that means my effort to rekindle my gamering love has worked.

Sorry family.

]]>
Tue, 17 Dec 2013 01:37:15 +1100 4e088147b2655a7866c94dd9e45a4de8
Painting Sheds (2013-12-09) https://matthew.kerwin.net.au/blog/?a=20131209_painting_sheds Painting Sheds

At what point does the colour of a shed become important? That PRISM jibe in the HTTP/2 draft keeps grating, every time I run into it. I know it's only five bytes, in a (usually) binary stream, and it doesn't matter which five bytes they are as long as they don't collide with any existing (or theoretical) HTTP/1.x method. But come on, guys, SIGAD US-984XN is so June 2013.

Seriously, though...

Let me digress for a little history of this particular bike shed:

Originally the spec said "SPDY", referring to the original SPDY spec on which HTTP/2 is based. It was updated to look like a HTTP/1.x query that would hopefully cause any HTTP/1.x server receiving it to barf something about 501 or 4xx or something that would clearly tell the sender that HTTP/2 is not spoken at this establishment. The method and request body in that pseudo-request were given as FOO/BAR, which is a pretty yucky shade of hospital green, but it serves as a functional undercoat, and it keeps the rain out of the wood.

Then it was cut down to FOO/BA because that made the whole message 24 bytes long (which is a multiple of 8, which is nicer for implementers). Still a yucky pale green, but one that's easier to reproduce.

Then there were some complaints about using "FOO" and "BA(r)" in an IETF spec (I was among the complainers – what are the odds that some gonzo implementations of HTTP/1.x out there don't already support debuggy/backdoor FOO methods?) We tossed around a couple of strings, focusing on the fact that they should not be mistaken for HTTP/1.x by any dodgy implementations (e.g. ruling out "CON" because it could be taken as the start of "CONNECT", etc.) And eventually we settled on STA/RT. Nice, neutral earthy tones with a complementary blue trim. And all was good and happy.

For a day.

Then someone who shall remain nameless (but is known online as martinthomson) decided to make a statement and/or topical joke about the NSA and spying and Edward Snowden and all that, by changing the magic bytes to PRI/SM [Note: scroll down to Ilya's comment on that commit]. Haha, PRISM. Yeah, we get it. So now there's a big cock-and-balls crudely spray painted over the side of the bike shed. Which, incidentally, has been included in a couple of implementation drafts now, so there are more and more repositories out there in the wild that include it, so it's getting hard-coded in more places, and gaining more momentum, and soon enough that silly cock-and-balls graffito is going to be so entrenched it will become standardised, and we'll be stuck with it.
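
For reference, the graffito in question is that 24-byte pseudo-request. Here's a rough Ruby sketch of the sniffing every HTTP/2-capable server ends up doing on a new connection, assuming the bytes stay as they are in the current implementation drafts:

    # The magic: a pseudo-request no sane HTTP/1.x server should ever accept.
    PREFACE = "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

    # A rough sketch; real code would buffer and cope with short reads.
    def speaks_http2?(socket)
      socket.read(PREFACE.bytesize) == PREFACE
    end

    puts PREFACE.bytesize  # => 24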

But I can't complain, because that would be "bike shedding." Surely there must come a point where the colour of the shed is important. Trivial details in a spec are still details in the spec, and this one is part of a MUST-level interop requirement, so absolutely everyone who implements or interacts with HTTP/2 is going to have to reproduce (or at least look at) that cock-and-balls every time they delve into the spec.

So, a question: do bike-shedding details ever become important? And if so, when?

Please answer in the Disqus comments below, or on Google+, or even on twitter.

]]>
Mon, 09 Dec 2013 13:33:20 +1100 228a74ae5906dedbb2608a30253bd63c
Becoming a Gamer Again (2013-12-02) https://matthew.kerwin.net.au/blog/?a=20131202_becoming_a_gamer_again Becoming a Gamer Again

I am on a mission. I have a quest. My goal is to rekindle my love of games and gaming.

To do this, I've opened somewhat of a time capsule in my brain, stepping back down memory lane to my earliest PC gaming experiences.

I need to point out right away that I didn't start PC gaming until I was about 14, in 1994-5 or so. My friend Greg had a 386 PC (later upgraded to a 486), on which we used to play countless hours of Jetpack and Wacky Wheels and Scorch. Another friend, Glen, had Prince of Persia on his PC, but I never played it enough to get any good at it.

A couple of years later I borrowed enough money to buy my first PC – a second-hand Pentium 100 from one of my other friends, Cookie. It had 32MB of RAM, more than enough to play Quake with the full "-heapsize 16384" or Grand Theft Auto (when it came out, and we acquired it) from inside Windows.

Before that, though, I'd played the usual '80s educational games at school, like Farmer Jane's Ponds and Where in the World Is Carmen Sandiego?, and Apple Logo (not technically a game, but I played it anyway!)

And of course, it being the 90s, I was already a fairly well established arcade and console gamer by then:

  • I'd spent several hours playing Super Mario Brothers and Adventure Island on my best friend Michael's NES.

  • I bought my Sega Master System II at some point in the early '90s. My gods, I was so excited. I'd saved up for over a year to get half the money for the console (Mum matched me dollar-for-dollar), and we had to travel all the way to Cairns to buy it, and then we visited some friends while down there and all I could do was sit there studying the box of my new game console, imagining what it would be like to play all the games on the back, impatiently waiting until we could finally get home and I could set it up and plug it in and play it!

    I ended up playing a lot of Alex Kidd and Sonic 2 and Bart vs. the Space Mutants (gods that was a hard game), and repeatedly renting Mortal Kombat and Chuck Rock. Remember when game rental was a thing? At the video store?

    I never quite finished Alex Kidd despite getting to the Janken rock-off a couple of times, and I don't think I ever got close to finishing Bart vs. the Space Mutants. I repeatedly clocked Sonic 2; it took about an hour to play through IIRC.

  • During the mid-'90s I sunk lots (and lots, and lots) of money into Mortal Kombat machines when an arcade opened up around the corner from my house.

  • And then I, somewhat unexpectedly, managed to score myself a Mega Drive II and my own TV, at the same time, subsequently sinking years of my life into MK3 and repeatedly renting NBA Jam T.E..

    He's on fire! Boomshakalaka!

  • Some friends of the family also had a Mega Drive, and Ecco, Bubsy, and Desert Strike feature strongly in those memories.

  • Sometimes we'd sleep over at Greg's, and Sabin would bring his SNES and we'd play Bomberman all night, or have our arses handed to us in Killer Instinct (C-c-c-c-c-combo Breaker!) Sometimes at Sabin's place we'd sit for hours and co-play through Secret of Mana.

Oh man, glory days.

But back to the PC...

When I bought the computer off Cookie it already had Quake installed. So, of course, I played that. A lot. A really, really lot. I was the first of my friends to include "+mlook" in my autoexec.cfg. I installed the Omicron bots, and trained myself up to pwn them in deathmatch. I finished the original game (did anyone else find Shub-Niggurath to be a bit of a let down? You play for hours, against harder and tougher and scarier monsters, and then you beat the final boss with a telefrag? (SPOILERS!) I was a bit "meh" when I eventually worked it out.) and both Scourge of Armagon and Dissolution of Eternity. Greg and I got our 28.8k modems talking long enough to play almost 10 minutes of DM from our own homes one time. I used to beat my friends in deathmatch using only the axe. I vividly recall one time snatching a jumping bot out of the air with the lightning gun and pinning it to the corner of the ceiling until it gibbed. I got a QuakeC compiler and created a mod, with things like laser pointers and monster-seeking missiles and teleporting rockets&grenades and AI that prefer to fight different types of monsters over the player. I lived Quake, and breathed lightning bolts, and rocket-jumped like a pro.

Real-time strategy was a hip new thing back then, too. Sabin's RTS of choice was Warcraft, but most of us played C&C and then Red Alert. Hours of fun and shenanigans were had. It was heaps of fun at a LAN party, with everyone sitting around a big table, when everyone's SoundBlaster speakers would announce in turn: "Nuclear launch detected." Everyone would mutter, glance annoyedly at the one person whose speakers hadn't emitted the warning, flitter nervously around their base, then laugh when one of us groaned at their dudes being burned alive. Also: a convoy of well-pathed harvesters is absolutely devastating against an infantry formation.

And between the FPS and RTS, I still made time to play Jetpack and Scorch, and Worms Plus+Reinforcements. (Oi! Nutter!)

These memories are fifteen or twenty (or more) years old now, but you can probably tell from my writing that they're still vivid, powerful, very happy memories. This was the gaming of my youth (before I moved out and Half-Life came along and changed (my view of?) the entire gaming landscape.) If I want to resurrect my love of gaming, I think I have to try to reawaken those feelings, and I figure a great way to do it is to play the old games.

One of my great regrets in life is letting Mum get rid of my Segas when I left home to go to university. There were only a couple of really big games, so I could probably rebuild my little library with a bit of clever eBaying (after gaining due approval from the Ministry of War and Finance, of course), but in the mean-time I already have a PC. Through the magic of XP's compatibility mode and emulators like DOSBox and my old CD boxes and various abandonware sites around the web, I've been able to reinstall Quake and Scorch and some of the other games of yesteryear (Hi-Octane is another one we used to play a bit at LAN parties; my memory of that one can be summed up as "frenetic").

I haven't gotten much gaming in yet, as such, but they're there, ready for me, and I'm definitely making the time to get back into them. As a general taste, here are two short clips of me first dipping my toes back into Quake and Scorch (with much thanks to Fraps).

The first two levels of Quake, using GLquake; my first play in at least a decade.
A single play-through of Scorched Earth, from memory, not having touched it in years.

Yes, yes, I know there's no challenge in playing against "Tosser"s, and I've since remembered that you have to actually energise the shields for them to do anything. Like I said, this was my first play in a very long time. I will definitely play more realistically in future!

Of course it wasn't just the games, it was also the people with whom and the circumstances in which I played them. So, please, join in. Tell me which games you played and loved, and which have stuck with you. Let's share our experiences in games and gaming, and see if we can't rekindle a bit of that youthful glee we once knew.

]]>
Mon, 02 Dec 2013 15:51:58 +1100 ca57cf2f853cedca082c5d5edc4992b3
Stuff, and Gaming (2013-11-25) https://matthew.kerwin.net.au/blog/?a=20131125_stuff_and_gaming Stuff, and Gaming

I'm meant to be writing another blog post. Actually, I was meant to write it a couple of days ago, but I didn't have anything to write about. And I still don't.

Last week I wrote about my disillusionment with the IETF process. That disillusionment has declined into ennui. I can't be bothered even reading the conversations, let alone taking part. Some folks are on a crusade to "fix" the web, some want to protect the children from the NSA, some are pushing their own hidden agendas, and a couple keep talking about "Snowdonia." What can you do when up against zealots and the paranoid?

I haven't really been a gamer for many years. I loved my Sega Master System II – I almost clocked Alex Kidd in Miracle World, and I used to routinely finish Sonic II – and then my Mega Drive 2 – I was the king of Mortal Kombat, MK3, and NBA Jam (Tournament Edition). In my late teens, when I managed to borrow the cash to buy my first computer, I used to own at Quake (fighting the Omicron bot when human competitors weren't available), and lose myself for hours in the original Liberty City of Grand Theft Auto; and then there were Team Fortress, Half-Life, TF2 ... And by that stage I made it back to university, where my friends and I used to play Blackhawk Down/Joint Operations and Operation Flashpoint, sometimes for days on end, and Dawn of War.

I used to love games.

Then I grew up, and got a job, and suddenly I didn't have time to play games, and (somewhat paradoxically) no longer had the budget for them.

I clung to WoW for a little while; but I wasn't that interested in what the game was becoming (grinding, endless raids, grinding, endgame fights, and grinding), and it was taking too much time from my family.

I still play Joint Ops today. I like to think I've gotten pretty good at it – nowadays I have to finish the cooperative missions by myself. And I still play GTA: SA – I have my finished save game, but I've never gotten around to getting 100% on it. I played Torchwood Torchlight (thanks Voxel) for a bit. It was really pretty, and kind of fun; but one day I exited the game and never bothered opening it up again. Nothing really drew me in. I've been watching a lot of YouTube lately, mostly gaming stuff like Nerd³ and Gamespot, and I think that's almost as good as playing games. I'm hooked on the Top Five Skyrim Mods of the Week, even though I've never been that into RPGs (I played Oblivion one time, very, very wrongly), and I really enjoy all the GTA V in-game footage I've seen, even though I'll probably never have the hardware to play it. I've even watched a few chapters of Dishonored and Call of Duty: Ghosts walkthroughs, which is almost as good as (or possibly even better than) playing them myself.

I used to be a gamer. I don't know if I could be again.

]]>
Mon, 25 Nov 2013 23:50:31 +1100 33edccb7063c0fd9f139e2ba09823372
Why Does the IETF Hate Me? (2013-11-18) https://matthew.kerwin.net.au/blog/?a=20131118_why_does_ietf_hate_me Why Does the IETF Hate Me?

I've been following the discussion on the HTTPbis WG, particularly with respect to the development of HTTP/2.0. I even contributed in a small way to some threads, and had a pull-request applied to the draft spec. I was really getting into it, and enjoying contributing to the betterment of the internet.

Granted, things got a bit boring when a couple of people started getting really pedantic about the number of bits saved when using a particular header compression algorithm (yes, bits), but at least it was a technical discussion about an aspect of the new protocol.

However on November 13 Mark Nottingham (the working-group chair) announced that HTTP/2.0 will only work for https:// URIs [Twitter, W3 Archive]

Everything kind of blew up then. Especially after slashdot ran it. The whole conversation got derailed; I won't bother repeating it here. The important bit (to me) was this:

Just to be clear, I'm a browser vendor speaking here, representing my own personal views, but those generally align with the Chromium project. And no, we don't have plans to support HTTP/2.0 in the clear. Firefox developers like Pat have said similar things.

This is bad. James Snell put it well; my (possibly hyperbolic) summary is: Google and Mozilla don't care about me. They want to do what they want to do. What I want doesn't matter.

This comment was a real kicker:

On 11/13/2013 03:09 PM, Karl Dubost wrote:
> (trimming the cc)
>
> Le 13 nov. 2013 à 15:41, Mike Belshe  a écrit :
>>      c) otherwise actively leveraging plaintext HTTP today for
>>         business or pleasure
> I'm one of this (indeed rare) person who is having a Web site, do
> not have analytics, do not have comments, or anything, do not set
> any cookies of any sort, etc. Plain HTTP works for me.

And plain HTTP/1.1 will continue to work for you, and that's a good, 
fine thing. Your simple site is unlikely to benefit much from the 
latency/multiplexing/etc improvements that HTTP/2 gives. Sites that do 
are more likely to the ones that carry user identity or other info that 
is better to keep secure.  Hence the carrot approach: use TLS if you 
want the fancy bells and whistles from HTTP/2.

The proposal Mark has laid out sounds like a reasonable compromise, and 
I suspect the other networking module peers at Mozilla feel similarly.

In other words: you aren't important. You don't get to use HTTP/2.0. You can keep using HTTP/1.1 until you're important enough to be able to afford the overheads of running HTTPS with properly signed certificates. You don't get to have a faster, more responsive site; you don't get to cut down on bandwidth costs; you don't get to play with the New Big Thing™. We don't hate you; in fact, we don't think of you at all. You are nothing.

Willy Chan's response to James Snell's question just adds to it:

... my default inclination is to tell IPP folks to stick with HTTP/1.X if they only want to support cleartext. If they want HTTP/2, then they should solve the blockers to adopting a secure transport.

In other words: you don't get to play, even if you're a big boy like HP or Apple, because there's a technical difficulty with our proposal. Oh and by the way we don't wanna fix it so we'll phrase it this way to make it your problem.

Aside: here's a pertinent response to the above.

Oh yeah, and then there's this:

On Wed, Nov 13, 2013 at 7:01 PM, Frédéric Kayser wrote:

> This also means HTTP/2 is not for everyone, it's only for big business,
> and you cannot get the speed benefit without some hardware investments.
> It also means that speed consciousness webdesigners will still have to
> continue using the awful CSS sprites trick when their target server is
> still HTTP/1.1 based.
> HTTP/2 sounded like a magical speed promise… that would be quickly
> adopted, but now it just looks like an alternative solely made for the big
> guys.

As far as I've seen, most small businesses get little enough traffic that
they wouldn't notice any difference w.r.t CPU usage.
.. and if it bothers them, they'd use HTTP/1.1 for web stuff, or are
already doing so.

Fortunately Microsoft cares. We are one browser vendor who is in support of HTTP 2.0 for HTTP:// URIs. The same is true for our web server. [WG Archive] I don't care why they care, or how much it may or may not be about me personally; but they say they're going to do a thing that will benefit me, and they don't have to do that thing.

One more quote from the discussion:

Le Dim 17 novembre 2013 23:12, Mike Belshe a écrit :

> There are a million apps in the app store, and every one of them had to go
> get a cert and keep it up to date.  Why is it harder for the top-1million
> websites to do this?

Because you're not designing for the to-1million websites, you're
designing for everyone including people who think green text on pink
background is pretty and don't want their web site go down every year
because their cert expired.

Yeah! What he said! I'm one of those people!

So in summary thus far: two of the three big browser vendors really don't care about me (as a website owner) at all. They want me to pay more money so I can continue to serve my website, and until I can afford that, I don't matter.

WebFinger

So, I titled this article "Why Does the IETF Hate Me?" and so far I've only really complained about Google/Chrome and Mozilla, although it was Mark (representing the IETF) that started it. Here's something that, yes, came out of Google, but was ratified by the IETF and is now a Proposed Standard with an RFC number and everything:

RFC 7033 WebFinger

WebFinger is used to discover information about people or other entities on the Internet [...]. For a person, the kinds of information that might be discoverable via WebFinger include a personal profile address, identity service, telephone number, or preferred avatar.

In other words, it's the old UNIX finger command, but running over the web. Remember when your email signature included references to your .profile and .plan? Remember "finger me for my public key"?

Well, WebFinger is that, again, using the web.

Except that it isn't.

RFC 7033, Section 4, paragraph two, reads:

A WebFinger request is an HTTPS request to a WebFinger resource. A WebFinger resource is a well-known URI using the HTTPS scheme constructed along with the required query target and optional link relation types. WebFinger resources MUST NOT be served with any other URI scheme (such as HTTP).

Wha wha hey!? But the.. I mean.. why the hells not? Yes, I'm very late to the party complaining about this since it's already ratified, but dude, seriously. Yes, a webfinger profile might include authoritative information that a consumer might use to authenticate my identity (?)... I guess (??)... if you absolutely depend on fingering me to discover my "identity service."

But hey, here's an idea: why not just tell those consumers to not necessarily trust anything served over an insecure connection? The same way we do for the entire rest of the web.

Because "they" (and I suspect Google here, but have nothing to substantiate that) have an agenda (to make everything on the web secure) I'm now unable to play with interesting and fun protocols without paying extra money to a) a CA, to sign a certificate for me*, and b) my host, to install the cert for me (and/or upgrade my hosting package to include a https:/:443 option).

Well you know what? Screw you guys. I don't care about your stupid MUSTs. They're dumb! I'll implement a non-compliant webfinger service, that looks exactly like a compliant one, but doesn't use HTTPS.

Oh wait, I already did. Let's see how well overly-restrictive specs stand up against people just doing what they want. And let's see how that affects the sanctity of standard-defining RFCs, and the authority of the IETF itself.
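
In case you've never seen one, here's roughly what a WebFinger lookup looks like from the client side – a minimal Ruby sketch, with a made-up host and account, and pointedly using plain HTTP:

    require 'net/http'
    require 'json'
    require 'uri'

    # Query a (hypothetical) WebFinger endpoint over plain HTTP -- which is
    # exactly what RFC 7033 says the server MUST NOT serve.
    resource = 'acct:someone@example.net'                  # made-up account
    uri = URI('http://example.net/.well-known/webfinger')  # made-up host
    uri.query = URI.encode_www_form(resource: resource)

    jrd = JSON.parse(Net::HTTP.get_response(uri).body)  # the JSON Resource Descriptor
    puts jrd['subject']                                  # e.g. "acct:someone@example.net"
    (jrd['links'] || []).each do |link|
      puts "#{link['rel']} -> #{link['href']}"
    end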

* Free Class 1 certificates notwithstanding.

]]>
Mon, 18 Nov 2013 14:07:38 +1100 bf7bfa76ebe6499235f89c2adcb8e8cb
Links (2013-11-12) https://matthew.kerwin.net.au/blog/?a=20131112_links Links

I've been meaning to write my "weekly" blog article, but I cannot think of a thing about which to write. Normally if I was struggling for a topic I'd just pluck something prominent from the zeitgeist and riff on that for a bit, or show off a new bit of knowledge or understanding I've gained, or something; but there's nothing happening. The internet is chugging along just fine, society isn't catching fire, so-and-so hasn't said anything mean to whatsisface for ages. I don't know what to write.

So, here are some links instead.

Hopefully that will tide us over until next week, when I can't think of anything to write again.

]]>
Tue, 12 Nov 2013 18:32:20 +1100 94347d76f9f8ca2b121e2c9a5ffbd369
Redo From Start? (2013-11-04) https://matthew.kerwin.net.au/blog/?a=20131104_redo_from_start Redo From Start?

I was supposed to write a blog post this weekend just past, but I didn't. Normally I would be writing one right now, but my mind is occupied. There is a system we use here at the library, based on an ancient iteration of a GPL'd open-source software package. Of course our system has been customised a whole lot over the years, and has deviated from its original fork almost as much as the official package itself has, albeit in completely different directions. And, of course, it's rubbish. Well, not rubbish, but it's not great. Our requirements have shifted over time, technology has moved on, and as inevitably happens to old software packages – especially when software engineering best practices aren't necessarily at the forefront – the code has started to develop a texture somewhere between spaghetti and concrete. The only features I can add now are relatively small, or – more significantly – localised, and my most common changes are cosmetic updates and bugfixes. For some reason the bugfixes never seem to end.

In short, I want to rewrite.

Before I undertake such a large ... undertaking, I want to be sure of what I'm doing. Let me enumerate some concerns and risks:

  1. I've been known to slightly overengineer solutions, especially when they can be construed as broad or potentially complex (read: fun).
  2. I also have a fairly strong case of NIHism, not (entirely) because I think other people's code is rubbish, but because I love writing code. I may (or may not) believe I can write it better, but I want to write it anyway. Coding is fun!
  3. It works. The current system may not be the best engineered or the most responsive or the easiest to update, but it's still working. It still meets our requirements, and lives up to its spec. For now. The serious question that has to be answered is: do we need to update it so much that a complete rewrite is required? Is the outcome worth the effort?
    I'm glad I managed to avoid writing "ROI" there.
  4. How much effort would be duplicated? This is similar to the previous point, but with different fore-knowledge: one of our newer (but fully capable and trusted) developers is currently working on an alternative version of the site, designed from the ground up to be the mobile web-based interface for the service. It re-uses very little of the existing code and is, I believe, built on the CodeIgniter framework. Which means controllers and models and views (oh my!), which (hopefully) means modularised development and support. Assuming it's built well, and I am making that assumption, it should be quite easy – once the mobile site is finished – to piggy-back non-mobile versions of the views onto the framework and replace the entire old site in one fell swoop. And that will also be fun.

These points are specific to me and to this project, but I believe they can be generalised and applied to any significant works. Please comment below if there is a point or concern you (or your boss) usually consider before leaping into something.

And, with those points expressed and rationalised and evaluated, I can conclude that I should not, in fact, rewrite our service's front end, no matter how much I may want to.

At least, not while I'm at work.

]]>
Mon, 04 Nov 2013 16:21:45 +1100 8846c30c0c3060a397d561e1656b119f
Global Functions in Ruby, Updated (2013-10-29) https://matthew.kerwin.net.au/blog/?a=20131029_global_funcs_in_ruby_2 Global Functions in Ruby, Updated

Yesterday I wrote about an interesting behaviour Ruby exhibits when resolving instance variables in global functions. It turns out I was wrong. A bit.

I said that anything defined in the global scope is actually defined on the object called "main," which is an instance of Object. However, that's actually a little bit incorrect, and as a result, while my analysis of scripts 2 and 3 from that post is correct, the premise that led me to create them in the first place was flawed.

The real truth is this: any function defined in the global scope is actually defined as a method in the Object class. This means that every object in Ruby inherits the method; which means that whenever it's called from within another method, that object (the one that owns the other method, not "main") becomes the receiver; and because it's defined on the object (in its inheritance tree) it has access to that object's instance variables. So, in the example I posted yesterday, it rightfully returns the Bar object's @foo, which equals 42.

Because I assume my words are convoluted, here it is represented in code form:

Script 1:

def foo
  @foo
end


class Bar
  def initialize
    @foo = 42
  end
  def bar
    foo
  end
end

p foo
p Bar.new.bar

Script 4:

class Object
  def foo
    @foo
  end
end

class Bar
  def initialize
    @foo = 42
  end
  def bar
    self.foo
  end
end

p foo
p Bar.new.bar

And, of course, the requisite output:

Script 1:

$ ruby script1.rb
nil
42

Script 4:

$ ruby script4.rb
nil
42

The "main" object's foo returns nil, but the Bar object's returns 42.

And now you know.

]]>
Tue, 29 Oct 2013 12:52:33 +1100 e14a129ac6cc82529b38f4e6c0970244
Global Functions in Ruby (2013-10-28) https://matthew.kerwin.net.au/blog/?a=20131028_global_funcs_in_ruby Global Functions in Ruby

Update: I've discovered a flaw in my understanding of Ruby's handling of global functions. View the updated entry for a less incorrect explanation.

I'm a little late for my self-imposed weekly blog posting, but I've just discovered something interesting about Ruby's resolution of instance variables.

Here are three Ruby scripts:

Script 1:

def foo
  @foo
end


class Bar
  def initialize
    @foo = 42
  end
  def bar
    foo
  end
end

p foo
p Bar.new.bar

Script 2:

$main = self
def $main.foo
  @foo
end


class Bar
  def initialize
    @foo = 42
  end
  def bar
    $main.foo
  end
end

p $main.foo
p Bar.new.bar

Script 3:

$main = class Foo
  def foo
    @foo
  end
end.new

class Bar
  def initialize
    @foo = 42
  end
  def bar
    $main.foo
  end
end

p $main.foo
p Bar.new.bar

What do you suppose they output?

An experienced rubyist would recognise that the first call to #foo will return nil in all three. The magic of Ruby ensures a few things:

  1. syntactically @foo can always only be an instance variable, so there's none of that foo if (foo=1) nonsense about resolving methods vs. variables
  2. there's always an object (anything defined in the global scope is actually defined on the object called "main," which is an instance of Object) so there's no problem resolving the instance variable @foo in script 1
  3. any variable which hasn't had a value assigned to it results in nil. Practically, this can only apply to instance, class and global variables, because any "sigil-free" variables have to have been assigned for the parser to recognise them as variables.

The problem – er, the interestingness – is in the second call. Bar#bar calls #foo in exactly the same way we just did, so one would expect the @foo variable to be bound to the main object in the same way. In scripts 2 and 3 this is in fact the case; the explicit receiver (i.e. the $main.) ensures it. However script 1 is weird.

Because #foo in script 1 is declared as a global method, it is only tenuously bound to the main object. When it is called from inside the scope of a Bar method, Ruby seems to search that Bar object's instance variables before stepping out to main's. So the output of the three scripts is:

Script 1:

$ ruby script1.rb
nil
42

Script 2:

$ ruby script2.rb
nil
nil

Script 3:

$ ruby script3.rb
nil
nil

And now you know.

]]>
Mon, 28 Oct 2013 12:11:28 +1100 d3c05d9cf99d847a841bebad868b3d41
Weekly Blog, №1 (2013-10-20) https://matthew.kerwin.net.au/blog/?a=20131019_weekly_1 Weekly Blog, №1

The other day I thought it would be good for me to write to a schedule. I'm not entirely sure why I thought that, or precisely how I thought it would be good, but I must have had a reason at the time, so I might as well give it a go. I'm going to try to write a new blog article every week. My rules will be pretty simple: one entry every week (it being Saturday night, I'm sort of scrambling to make my first deadline), and within the broad theme of ... whatever it was I originally envisaged this blog's theme to be. Random geekery, I suppose.

Yesterday I saw an article (via PAR) which was partly a dump on BioShock Infinite, but mostly a rage against the status quo of game reviews. I've not played BioShock (either of them), and I don't care about game reviews, so whatever, but he mentioned Dear Esther as another game he didn't like that was lauded highly. Or greatly. Or however things are lauded.

I've played Dear Esther. I thought it was brilliant. It was a bit weird in the tunnel bit, which went for a lot longer than I felt it probably should have, but the other bits were ... brilliant. That's the only word for it. I've never been to the Hebrides, but I feel like I know what it feels like to be there. Dear Esther is such a beautiful experience, beautiful and terrible and a bit creepy. As a game, though, it's rubbish. Dear Esther isn't a game, it's an art.

And now I feel I need to talk about Half-Life 2. In case you're wondering, Dear Esther started out as a total conversion mod of Half-Life 2, and it still uses the same engine, and when you're playing it, it has the same feeling as a Half-Life 2 game (except that you can't run or jump); the same smoothness and glugginess and fluidity and feel. Half-Life 2 has more colour than Dear Esther. It also has more game elements (that is to say, it has game elements.) Where I felt most let down by Half-Life 2 was the fact that it was the sequel to Half-Life. In Half-Life I was Gordon Freeman, an identifiable but unremarkable, softly-spoken chap whose only real defining attribute was the fact that he had a suit. When things first went bad, all he had was the suit. And then he picked up a crowbar. That thing was like a divine relic, the item that finally gave Gordon and me the power to fight back. No matter what sub-machine guns and rocket launchers and gauss weapons and whatever we picked up later, I always kept the crowbar close. Gordon Freeman was an ordinary guy in a fancy suit with a crowbar.

And then there's Half-Life 2. Suddenly I was Gordon Freeman, legend, icon, nigh invulnerable superman. The first levels had me jumping over rooftops in broad daylight, dodging bullets as I scampered to meet my crew. Not trapped half a mile underground by myself in the dark with mysterious and terrifying screeching alien things, no, jumping over rooftops dodging gunfire. Suddenly Gordon Freeman had become Quake Guy. That broke the experience for me, the moment Gordon switched to Gordon 2. Where before I'd killed headcrabs with the crowbar in frenetic desperation, now I beat my enemies to death with glee and abandon. I plunged head-first into firefights confident in "the Gordon Freeman"'s ability to survive and dominate. Stupid AI in blue uniforms never scared me (hey, I played Wolfenstein, I know how to deal with guys in uniforms), and headcrabs? Hah! I made it out of Black Mesa and now I'm the Gordon Freeman. Hell, there's even a headcrab with no teeth! Headcrabs are nothing, now. I played with my enemies, finding new and inventive ways to kill them. I ran through Ravenholme like, well, like Quake Guy. I twitch-shot everything that moved, and blasted those werewolf zombie things off their silly downpipes, and batted black headcrabs with my crowbar because it was fun, and I laughed. I smashed and slaughtered, and father Grigori had almost as much fun as I did, judging by the way he laughed, and Ravenholme – which I'd heard was meant to be scary – was rather boring. I've played Quake already. The spinny blade traps were kind of neat, though, and launching sawblades with the gravygun.

At the end of one of the episodes – was it number 2? I forget – there's that stupid, protracted thing with the car and the lumbermills. (This was Half-Life 2, right? I'm not confusing myself with Arathi Basin ..?) Anyway, I'm not sure if you remember the little blue alien tripod things... Hunters? The first time we met one of those for real, in a ruined building, I ran up in typical Quake Guy fashion and killed the Hunter to death with my shot gun. Thus, I never learned that they're meant to be tough or threatening. So when we got to the White Forest level I almost ignored the Hunters, except when it was fun to side-swipe them to death with the car. Few enemies in the game were more than an irritation, a puzzle solved with ammo, that detracted from the storyline. This, to me, was not Half-Life, and it made me a little sad.

When I can afford an upgrade, and start running hardware less than a decade old, I'll install Space Marine. Then I'll be happy to slaughter my way through fields of enemies. Because Space Marine. Not because crowbar.

]]>
Sun, 20 Oct 2013 00:40:44 +1100 f539cfeecfc15b0ac63502c63c5729db
Prahova (2013-01-30) https://matthew.kerwin.net.au/blog/?a=20130130_prahova Prahova

Herein is a summary of the closing events of the campaign of the five heroes who set out from Odenfell to save the kingdom of Prahova.

Jan 17, 2013

My D&D party just decided to set fire to a third of the city in order to trap and destroy an invading army.

We're defending this city. By burning it down.

Thom (my character) wasn't sure; he tossed a coin to decide which way he would go, then put the full force of his charisma and diplomatic skills behind the coin's decision.

So, they're currently describing the best places to put all the oil and thatch.

Jan 25, 2013

Continuing the saga of the burning city...

Here follows my account of the encounter that played out in last night's D&D session.

[Warning: split infinitives abound]

Last night, we resumed our collaborative narrative at the point where the avenger had scouted out the encroaching force, the ranger and his troupe had drawn them into the city (through the fuelled suburbs and up to the wall), and some of the others had ignited the trap. We had taken up position above the gate, with Thom perched regally front and centre on the discarded appropriated throne of Prahova (still wearing the discarded appropriated crown of Prahova).

A heavy darkness, like a fog, settled over the city as a fell, booming voice chanted out of the darkness.

The vanguard of the attacking force, the fastest runners, turned out to be undead abominations reminiscent in their actions of the fast zombies from HL2. As the screams of burning cultists filled our ears and the dry heat of the raging fire burned off the fog, the eyeless, needle-mawed things scurried up the walls, and we five – aided by six townsfolk armed with shortbows – thrust them from the parapets, cutting them down as fast as they could climb.

One of the first to scale the wall leapt at Thom where he sat, and as its savage swiping blow landed, an explosion rocked the throne. Duerim, the dwarf and only other party member in direct line of sight, was sure Thom had exploded, and evaporated the thing along with him. As the red mist and disembodied limbs settled, there was no sign of Thom. Unbeknownst to Duerim, Thom had invoked the power of the chaos below, and teleported both back in time and to another part of the wall; the elemental forces involved and the vacuum left in his wake were what had exploded the thing.

[I.e. a daily immediate reaction (Slaad's Gambit) critted, and the eye artifact in his crown allowed him to stack extra crit dice on the damage, at the cost of a crit dice roll of damage to himself.]

During a lull in the fighting a great noise was heard rushing through the burning city, and a giant snake burst forth, smashing the gates and entering the courtyard beyond, followed by several squads of cultist soldiers. Everyone turned to fight the force now inside the city courtyard. Shreth (the githzerai avenger) flew down and started attacking the snake with his sword, Thom set some cultists on fire, and Duerim leapt from the wall and tore the cultists to shreds with an unstoppable charge through their ranks before being joined by Gonfei (the githzerai monk).

The snake squirted poison around infecting some of those on the ground, including some reinforcement troops from the city militia. Seeing the potential for the battle to quickly go the wrong way, Thom summoned as much magic as he could through the Eye of Morcar, and sent a giant ball of chaotic energy at the snake. As it struck it flashed blue and icicles formed on the snake, dragging it to the ground and pinning it there, and a sustained chain of energy lashed out from the eye, slamming wave after wave of energy into the creature. As blood began pouring from various rents in the snake's body the chain vanished and Thom slumped to the ground with blood running from his nose and ears.

[I.e. a daily attack (Chromatic Orb) critted, and this time I did the stacked damage bonus many times, adding 1d12 damage to the snake and taking 1d12 damage over and over until Thom dropped below 12hp. The first time, with the zombie, I dropped 7 of my 55 max hp; against the snake Thom got down to 5hp before I stopped. I think I dealt about 107 damage to it in total, from that single attack.]

Then Thom blacked out [in the real world John's connection dropped out] and when he came to, much had happened. Gumbar (the half-orc ranger) had fired his "special" arrow (earlier on Gonfei had scrounged some glue and used it to bind his precious hoard of black powder to an arrow) but missed horribly [a natural 1 on the attack roll]. Fortunately the arrow embedded in a wall without exploding. Shreth and the others had pushed the attack on the snake, and killed it.

Everyone sat down to recover their breath and bind their wounds, but after a couple of minutes an immense explosion sounded from the inner city behind them, like a sustained peal of thunder. A massive bolt of lightning seemed to have lanced the great square a few blocks away. They quickly jogged towards the light, and as they rounded a corner they saw a looming figure, a seven foot tall ebon-skinned being at the centre of the conflagration of light and sound filling the square. Gumbar let fly his (recovered) arrow from forty feet, and it ignited as it passed through the lightning, exploding right in front of the figure and knocking it to the ground. It climbed up to one knee and turned towards the party, half its face torn away by the explosion, and spoke. Its voice matched that which had summoned the fog, and it said ... er, something portentous. I forget. Then it jerked upright, as though tugged by strings bound to its chest, arms hanging limp (and mangled) beside it. The lightning narrowed, focussing on the figure, tearing up the ground as it retracted revealing a buried ziggurat beneath the square. When the lightning bolt was barely wide enough to encompass the figure, its concentrated light blinding everyone nearby, it exploded. In place of the dark figure there hovered a great ball of distorted flesh, with many eyes waving on tentacle stalks, and a great central eye centred above a wide, toothy maw.

[close-up of heroes' faces, and snap cut to black]

Update

Bard: I believe the big bad tried to convince our party to abandon their soft city-folk ways for a life on the edge, full of excitement, thrills and uncertainty? I guess his intel wasn't the best. Offering wealth, power, beer and the chance to burninate everything might have worked.

I guess your talent for cutting tempting deals with your enemies falls by the wayside when you spend all your time managing a conspiracy, building an army to subjugate a major city, reviving a lost religion and learning to transform yourself into a giant terrifying abomination.

Jan 26, 2013

Last night, on Matty's D&D...

We ran a second session last night, because we were very close to a resolution on the current arc, and because Ciaran is leaving the country for a few weeks.

In the last episode we left our heroes facing a floating eyeball abomination hovering above the top of a ziggurat which had emerged from beneath a large plaza in the city of Prahova.

Shreth found an unpleasant pillar, which Duerim knocked over. There were lots of laser lights. Duerim got turned to stone for a bit, but he got better. Thom dumped exactly 100% of his HP into another critical hit stack attack, dealing about 74 damage to the beholder; he got better next round thanks to his Die Hard feat. Gonfei got zapped by a necrotic dissolving ray, and failed 9 consecutive saves to stop the ongoing 10 damage. His one positive action all fight was to grant Thom an extra healing surge immediately after he'd recovered from the suicidal attack. Then Gonfei fainted, got better, fainted again, got given a potion, fainted again, etc. for a few rounds, even after we'd finished off the beholder. Thom landed the final blow, zapping the beholder with an arbitrary magic beam.

Then we dispersed, Thom to find his throne and set it in place atop the ziggurat, the others to help put out the fires spreading through the city.

Next day, after a sleep in the cloisters of a friendly temple of Bahamut, Thom was summoned to the home of some nobleman who had convened the city council. The whole party followed. The nobleman, Lord Gorhar, made a grand proclamation, proposing Thom as Lord Protector of Prahova. Much argument and debate ensued, in part revolving around the fact that it was Thom who'd convinced the council to fire the northern third of the city the previous day. Thom made a grand speech, about having led the people in battle, now leading the people to rebuild Prahova, which swayed part of the council. A bit later Duerim got bored, and declared loudly that the council could sit here debating as long as they wanted, he was going out to help fight the fires and rebuild what he could. Mickael, a town engineer and leader of the rebels who had originally organised the revolt we'd used to accidentally dethrone young King Radost, spoke out against Thom, something about being upset that he (Mickael) had organised a fighting force of the city's populace, but they'd barely had a chance to defend the city on their own terms before the outlanders and their crazy fire plan was enacted. Gumbar got miffed at that, pulled out one of the giant serpent's teeth (a souvenir from the night before) and drove it into the table, asking how well the farmers and innkeepers would have done standing before the serpent, then stormed out. Thom said something else about how the townspeople had quailed before the snake, and would have fled if Gonfei hadn't rallied them and ordered their movements and driven them to pin the snake with their pikes. At some point Mickael said that some of the zombies had entered the city by roundabout ways, and his men had done well enough fighting them off; in fact there were reports of some zombies still infesting the castle. At that, Thom stood up and said that whatever the council decided it was to be their decision, not his, and he was off to cleanse the castle.

He and the two gith went to the castle, stalking its eerily silent halls, until they startled a pair of guardsmen. Together they found and finished off a zombie or two, and eventually made their way to the throne room, meeting several more squads on the way. The guards reported that they'd scoured the entire castle and it was finally free of the abominations. Since they looked up to Thom, and since he was by this point quite used to playing the leader, he praised their bravery and their efforts, and gave them a half-day off to be with their loved ones, or do whatever they would.

Then he found a discarded chair, dragged it to the old throne's dais, and sat down on it, brooding over what had happened, and how much he'd grown and changed in the past few months, and what would happen with the council.

Seeing him in his chair on the dais, and feeling their part in this story was now complete, the gith shared a look, turned around, and walked out, leaving these lands of men.

Footnote

After doing what he considered enough to help the citizens of Prahova stabilise and start to reestablish themselves, Gumbar went on his own way, continuing his mission to seek the scattered half-orc people of the world and unite them once more under a single banner.

I believe Duerim hung around the city, finding a sense of purpose he'd never felt before, garnering a following of like-minded young men who established themselves as a semi-official gang of heavy drinking, oddly spiritualistic militiamen.

]]>
Wed, 30 Jan 2013 12:19:51 +1100 eb07417d2dd10dc18b081f82f5f22f40
Attn: Perl Programmers (2013-01-23) https://matthew.kerwin.net.au/blog/?a=20130123_attn_perl_programmers Attention Perl programmers!

The following, while valid, is not “good code”.

Yes, you can fit it all in one SLOC, and you've got ternary operators and magic variables and conditional execution all jammed in there at the same time, well done; but it's bad code. Do not do it.

    defined $uri or $uri = !$prefix || $prefix eq $self->envprefix
    # still in doubts what should namespace be in this case
    # but will keep it like this for now and be compatible with our server
        ? ( $method_is_data
            && $^W
            && warn("URI is not provided as an attribute for method ($method)\n"),
            ''
            )
        : die "Can't find namespace for method ($prefix:$method)\n";

Note the following code sample. Yes, it has many more lines of code, and yes there is more typing, but note, too, how much easier it is for a human maintainer to scan, parse, and grok.

    if (!defined $uri)
    {
        if (!$prefix || $prefix eq $self->envprefix)
        {
            # still in doubts what should namespace be in this case
            # but will keep it like this for now and be compatible with our server
            if ($method_is_data)
            {
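                # $^W is Perl's global warnings flag; only emit the warning when warnings are enabled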
                $^W && warn("URI is not provided as an attribute for method ($method)\n");
            }
            $uri = '';
        }
        else
        {
            die "Can't find namespace for method ($prefix:$method)\n";
        }
    }

I know that maintainability is a swear-word in Perl circles, but honestly people, I think it's time for a paradigm shift.

]]>
Wed, 23 Jan 2013 14:52:24 +1100 fe539796d5458d1aa1a62774ce935392
Spelling Pedants (2013-01-07) https://matthew.kerwin.net.au/blog/?a=20130107_spelling_pedants Someone calling themself a professional C programmer, writing to a software development mailing list, used the word "parenthesizes" for "parentheses".

It could just be a typo (I know that sometimes my fingers see certain letter-combinations as triggers to write other words – but I usually notice that "integer" is out of place when I meant to write "interesting") however I find it unlikely. There's a whole extra syllable in there, and it feels like the kind of word someone would invent if they wanted to pluralise a word that they didn't quite understand.

Which brings me to my point: are all "good" code monkeys grammar nazis and spelling pedants, and the type of folk who would look up a word they weren't sure about before posting it to a mailing list of their peers? Or is that just me?

]]>
Mon, 07 Jan 2013 09:57:07 +1100 c9d5833b8e8463ada9be77aab31e8f64
PHP Ternary Precedence (2013-01-03) https://matthew.kerwin.net.au/blog/?a=20130103_php_ternary PHP Ternary Precedence

Take this statement:

true ? 1 : false ? 2 : 3

In ruby, C, Java, etc. it evaluates to 1.
In PHP it evaluates to 2.

Apparently PHP evaluates inside-out:

(true ? 1 : false) ? 2 : 3
=> (1) ? 2 : 3
=> 2

...whereas everywhere else the ternary is right-associative:

true ? 1 : (false ? 2 : 3)
=> 1

Interesting.
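
If you actually want the C-style result in PHP, the usual fix is to write the grouping explicitly rather than lean on associativity. A minimal sketch:

<?php
// Explicit parentheses give the same grouping as ruby, C, Java, etc.
$x = true ? 1 : (false ? 2 : 3);
var_dump($x);   // => int(1)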

]]>
Thu, 03 Jan 2013 14:16:56 +1100 dc9f499cb734706f754e3d68417c3495
Dimensionality of Web Design for Mobile Devices (2013-01-02) https://matthew.kerwin.net.au/blog/?a=20130102_mobile_web_dimensions

On the Dimensionality of Designing the Web for Mobile Devices

Here are some thoughts I've been cultivating for the past year about developing websites and web applications in a mobile world.

I am not going to talk about whether or why we should develop mobile-friendly websites; instead I will assume that that is what you are doing, and enumerate some things to consider as you do.

I chose the word “dimensionality” for the title because, as I visualise it, the domain of developing mobile-friendly websites has several different classes of consideration which are entirely orthogonal. It is quite easy to find articles on the web that talk about detecting mobile devices, or presenting alternative layouts, or simplifying navigation, or making click-targets larger, etc.; however, very few of those articles seem to be mindful of the fact that they only address one class of mobile-friendly design considerations; or worse, some seem to unwittingly conflate the classes, applying a solution from one class of issue to an entirely different class of problem.

Well, I'm going to set the record straight!

How Many Dimensions?

I may have been a bit harsh when I said that the issues are entirely orthogonal. They aren't. As with all things, there are gradients and overlaps and fuzzy edges. I could devise a hierarchical ontology classifying the classes of issues, but I think that would be a waste of time. Instead I'll make some arbitrary headings, and clump things together under them.

And here they are:

  1. device size the most obvious class of concern: how big is the screen, and how much info can we cram in there?
  2. device capability actually a generalisation of size; what browsers and standards does the device support?
  3. human interface is there a mouse? Keyboard? Touch screen? Multitouch? Drag&drop, swipe, pinch, etc.?
  4. use case how is the user going to use this service?

Device Size

Different devices have different sizes. They have different numbers of inches per screen, and different numbers of pixels per inch, and different numbers of pixels per view angle. This class of consideration covers all these issues, and is widely discussed on the web. I will mention some of the key concerns, and possible strategies for addressing them.

Small screens can't fit margins.

On a typical desktop computer (let's say anything larger than 1024×768px) there's room for a sidebar on the left with your site's main navigation, breakout boxes on the right, and a sticky banner across the top with easy-access buttons for common actions, all without overly crowding the all important content you are delivering. On a QVGA device (320×240px) you would be flat out fitting a heading and three lines of text.

Never fear, solutions abound! My advice would be:

  1. consider the devices that will be used to view the site, ordered by frequency and/or how much you care about them
  2. simultaneously, consider the site's navigation; what pages exist, how they are navigated between, how much access the user needs to have at any given moment
  3. sketch out some designs specifically suited for the top few devices (and maybe an outlier, if you want to cover more bases) that address the site's navigation needs
  4. try and come up with a responsive design that incorporates all your designs, or at least their key features

It's up to you whether your baseline design targets the full-screen desktop environment (with graceful degradation for smaller screens), or the other way around (mobile-first), based on how you prioritise the devices and how hard you think it will be to implement one over the other.

Small screens flow differently.

This is almost exactly the same as the previous concern: you can't fit as many lines of text on a smaller screen, those lines wrap sooner, and boxes take up more (relative) space.

The solution is the same as above: work out which devices you need to support, work out some designs to suit those devices, then work out a responsive design that incorporates them all.

Small screens need small images.

There is a phrase floating around the web recently: art direction. I think the Cloud Four Blog expresses it best. Simply put: for a small screen, one can either downscale pictures or crop them. This is a choice you as the designer have to make, and you may have to lean on the standards a bit to achieve your design.

As a general rule, my implemented solutions for this class of concern are all client-side. I make all the resources available to all browsers and devices, provide hints about which resources are intended for what context, and let the browser choose. I may call it “responsive design” and “CSS rules,” but essentially that's what I'm doing.

Device Capability

Along with screen size, different devices are capable of doing different things. Some have more powerful processors, or more memory, or access to higher bandwidth, or the ability to display more colours. And further, on any given device there may be a choice of different software packages (read: browsers) which have different capabilities such as standards compliance, scripting support, even differences in how they route bits of data from the web to the screen.

The big issue that springs to mind is:

Not all browsers have javascript.

I could replace “javascript” with just about anything and it would still be true. And, in fact, my approach to dealing with it would be the same.

  1. consider the lowest common denominator about which you care
  2. work out a strategy for supporting the relevant browsers; for example
    • using <noscript> fallbacks, to ensure minimal functionality is maintained when the scripts don't run; or
    • designing bottom-up, creating the site initially with no scripting at all, then adding script-based enhancements
    • anything else you can think of; for example on this site (look at it in a very narrow window) I created a whole separate page which exactly duplicates the slide-down navigation menu, linked from the menu button. If the browser supports my scripts, the menu will slide down when the button is clicked; if not, clicking the button will take the user to the menu page.

Ditto Flash, CSS, webfonts, X image format, etc.

This class of concern can be solved by either (or both of) client-side and server-side logic. If I know that, for example, iOS devices don't support Flash objects, I can detect them from the server and deliver an alternative version of the document (server-side). Or I could assume the worst of everyone, and simultaneously send the Flash object, and instructions for what to do if/when it doesn't work (client-side).
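
As a concrete sketch of that server-side option (deliberately naive, and with made-up template names), the logic can be as simple as branching on the user agent:

<?php
// Naive server-side device detection: pick a template based on the user agent.
// Real UA sniffing is much hairier than this; treat it as illustrative only.
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
if (preg_match('/iPhone|iPad|iPod/i', $ua)) {
    include 'player_html5.php';   // hypothetical Flash-free version of the page
} else {
    include 'player_flash.php';   // hypothetical Flash version of the page
}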

Human Interface

This isn't about UX per se; in this section I'm specifically referring to the physical interface, and how the human user interacts with the device.

How do users scroll?

This is one aspect I've come across which rarely gets a mention. If the device in question uses windows and scrollbars, you can put fixed-sized elements with over-sized contents all over the shop and let people scroll to their merry hearts' content. (Whether or not this constitutes good design is a different question.) However if the device uses viewports things can be a little different.

While designing a site one must keep in mind that, for example, iPhone users might find it difficult to scroll a box in the middle of a page, and so might like an alternative presentation (although preferably not one quite so bland).

Can they hover?

With a mouse it's quite easy to move the cursor over an element, and have this trigger an action; or give focus to an element and send it a hardware action (e.g. scroll) without activating the “click” mechanism. On a touch-screen phone that can be a lot trickier.

While designing a site one should consider whether or not to include things like hover menus, and if one does, whether and how to support devices that don't do hover.

Can they type?

Directly related to the previous concern is keyboard shortcuts; simple on a desktop computer, impossible on my Android phone. Once again it's up to the designer to decide whether or not to use them, and how much to rely on them.

This class of concern I usually solve server-side. Since the issues usually involve a rethinking of the whole site interface, I'll usually design two completely different versions of the site, and use server-side client detection to choose which to deliver.

Use Case

There's a VentureBeat article I read a few months ago which originally solidified in my mind this idea of dimensionality. It describes some of LinkedIn's approach to web design, specifically the relationship between mobile and desktop devices. Now it might be sensationalist (or otherwise poor) journalism for said article to use “Responsive design just doesn’t work” as a heading, but it's definitely wrong; I interpreted Kiran Prasad's statement as: responsive design is a tool for presenting the same content in the same use case on different devices.

Removing the editorial insertions from the quote, Prasad said, “We're looking at the ‘entrenched’ use case, the coffee-and-couch use case, the two-minute use case.” Where VentureBeat went wrong was equating use cases with devices; as though it's impossible to want to get a two minute summary while sitting at a desktop, or that users will never want to get some serious work done on their iPad. The thought I had at the time, and am professing now, is: each use case represents a different product.

To use the LinkedIn example, the “entrenched product” would provide the full functionality of the site, including all the administration tasks and that sort of thing. The “coffee-and-couch product” might let you play with your profile, search for contacts, make recommendations, etc. The “two-minute product” might just present a summary of news articles and updates from your groups and contacts. In an ideal world, all three products would be usable on any device, using responsive design or whatever equivalent technique is appropriate to make that level of content work on the given device. The big advantage is that this gives the user the power to decide which product they want to use.

I really want to emphasise that last sentence, but it would be a bit naff. It's a really important point, though. If you see that I'm accessing your site from my phone it might be reasonable for you to assume that I'm here for a two-minute update, and initially serve me the appropriate product, but I might have a specific task in mind that is only possible using the entrenched product, so would really like the ability to switch to it and still have it work on my phone.

My strategy here, and my suggestion to everyone else, is:

  1. determine the use cases; work out who is going to be using your site to do what and how
  2. design a product for each use case; they might be lighter or heavier versions of the same content, or provide a subset of the functionality, or they might be completely different sites (who knows?)
  3. treat each product as a different site, and develop each one for all appropriate devices, as you would any stand-alone website

That final point is, I think, the main point I'm trying to make with this entire post. There is a difference between the device I'm using, and what I want to do on your site.

So in summary: be mindful of the devices people will use to access your site, be mindful of what various devices are capable of doing, and be mindful of what people will want to accomplish while visiting your site. Go forth and be productive!

]]>
Wed, 02 Jan 2013 13:59:05 +1100 986550e82c4821dcbb6f841622d33bec
PHP: references and loops (2012-06-13) https://matthew.kerwin.net.au/blog/?a=20120613_php_gotchas_1 PHP: References and Loops

The PHP language has a couple of quirky behaviours which can catch you out unexpectedly if you don't know what to look for. Here are two I've discovered recently:

1. all variables that contain instantiated objects hold a handle to that object (so plain assignment copies the handle, not the object), while explicit "references" (created with =&) make two variable names share the same entry in the symbol table.

The only way I can explain is by means of example:

<?php
class SimpleClass {
     function __construct() {
           $this->var = '';
     }
}

$instance = new SimpleClass(); // the original object
$assigned = $instance;         // copy assignment
$reference =& $instance;       // reference assignment

$instance->var = 'asdasd';     // change the field in the original instance
$reference = null;             // make the 'reference' variable point to nothing

var_dump($reference);          // => NULL, as expected
var_dump($assigned);           // => object(SimpleClass)#1 (1) { ["var"] => string(6) "asdasd" }
var_dump($instance);           // => NULL

Two things to take away:

  1. $assigned = $instance creates a second variable which refers to the original object; and
  2. $reference =& $instance essentially means that $reference is now an alias for $instance, and anything you do to one also happens to the other. I don't know any other language that has these sorts of references.

If you change $reference = null to $instance = null the final output is the same. $reference and $instance are the same variable (albeit spelled differently.)
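
One extra wrinkle worth knowing (my addition, not part of the original example): unset() only removes the name it's called on, and leaves the other alias alone. A minimal sketch:

<?php
$a = 1;
$b =& $a;       // $a and $b are now two names for the same variable
unset($b);      // removes only the name $b; the variable itself survives
var_dump($a);   // => int(1)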

2. foreach maintains a reference to an array, even if you overwrite the array variable inside the loop.

<?php
$foo = array(1,2,3);
foreach ($foo as $x) {
	echo $x;
	$foo = array(4,5,6);
}
var_dump($foo);

...which outputs:

1
2
3
array(3) {
  [0] => int(4)
  [1] => int(5)
  [2] => int(6)
}

While you're inside the loop, the iterator always refers to the array originally identified by the variable.

For a more potentially destructive example, we can overwrite $foo with something that can't be foreached over:

<?php
$foo = array(1,2,3);
foreach ($foo as $x) {
	echo $x;
	$foo = $x;
}
var_dump($foo);

...which outputs:

1
2
3
int(3)

And now for something that will hopefully make you cringe:

<?php
$foo = array(1,2,3);
foreach ($foo as $foo) {
	echo $foo;
}
var_dump($foo);

Yes, apparently this is valid, even though we're overwriting the array with its own members in a horribly obfuscating and mind-hurting way. The output is:

1
2
3
int(3)

...which I suppose makes sense, once you realise that the foreach loop seems to have created a safe reference to the original $foo array. Note that the second $foo variable doesn't mask or shadow the outer-scope's foo, it overwrites it, just as if you did the assignment inside the loop body.

]]>
Wed, 13 Jun 2012 17:51:20 +1000 64cb900a603e450eeb36d05d7ff35cf4
tally-ho (2012-03-16) https://matthew.kerwin.net.au/blog/?a=20120316_tally_ho Tally-Ho

I have inadvertently invented a ternary tallying number system.

Our drinks fridge at work sells cans of soft-drink for $1.50 and small chocolates for $0.50.  One day I didn't have change, so I wrote my name on a sticky-note and put a vertical stroke beside it – signifying one drink.

Then, at another time, I wanted a drink and a chocolate, so I drew a short stroke beside another regular-sized one. Thus the tally read: two big items, one small.

Then I wanted another chocolate but didn't have change, so I added a small stroke above the existing one, with a gap between. Thus two small items beside my two big ones.

The revelation came when I wanted a third chocolate. It occurred to me that three chocolates is the same price as a single drink, so I simply filled the gap between the small strokes. And so my ternary tallying number system was born.

1 = ╷
2 = ¦
3 = |

There is ambiguity, of course, because there's no distinction between an isolated 1 and 3, however since there's never more than a single 1-digit in the tally, the ambiguity is quickly resolved.

You could include a parity mark if you wanted to indicate the height of a 3-digit; or similarly you could add something to the 3-digit (for example a small slash: ∤ ), but personally I can't be bothered.

]]>
Fri, 16 Mar 2012 13:11:00 +1100 eb58583cc4922f3f569baa646c911db3
The End is Nigh (2012-01-31) https://matthew.kerwin.net.au/blog/?a=20120130_end_is_nigh The End is Nigh

It seems that the world as we know it is about to end, but most of us haven't noticed. I guess it's not the kind of thing you usually want to go noticing.

The "world" I'm talking about isn't the actual world (I'm pretty sure Earth will be around for a while yet, and I'm quite confident people will be scumming up the surface while simultaneously inhabiting the little worlds inside their heads for at least a bit longer too); I'm not prophesying some pseudo-Mayan-apocalypse scenario, or fireballs, or gods and angels stepping down from on high to mess with us. But a lot of the little details of our daily lives seem to be teetering on some sort of edge, and they're being inexorably pushed over.

And the internet has done it.

Scott Kurtz, who does PVPonline, has a lot to say about what it means to be a webcomic guy, and how the "old boys" of traditional print comics and syndication view him and his peers, and how they just Don't Get It™. The internet is a thing that they refuse to acknowledge, and therefore they have no idea that they should be dealing with it, let alone how. For quite a while I wondered at his passion (and, dare I say it, vitriol) on the topic – I understand it's his medium and his livelihood, but it did seem like he was taking things overly seriously. Now I feel myself definitely coming on board.

According to a TechDirt editorial, MPAA's number two admits that that industry is "not comfortable" with the Internet. We all remember the SOPA/PIPA thing right? After all, it only happened a week ago. Independent online filmmakers – let me point at, say, Felicia Day as an arbitrary example – are finding out that there are ways to build success on the internet, but Hollywood just Doesn't Get It™.
By the way, you should read that linked article. It's not very long, and it sums up quite nicely some of what I'm trying to evoke here. It's all right, this page will be here when you get back.

And now I'm hearing that my friend Katharine Kerr, the author, is gloomy because it's hard to sell books in today's socio-economo-whatevery climate. It would appear that the traditional publishing monolith isn't quite cutting it; and yet I know, or know of, a good many folks who are quite successful at independently publishing materials and distributing them using the magical powers of the internet. They vary from hobbyists to RPG creators to professional novel-writing authors, using tools like kickstarter to raise capital, distributing their materials through amazon and the iBooks store and whatever other channels they can find, garnering community support, crowd-funding and crowd-sourcing their success.

Kit's cool, she gets it; but there's not a great deal she can do about it. The whole concept of breaking away would be a huge gamble for her, since her books are her entire livelihood, and if it all were to go pear shaped the consequences for her could be quite devastating.

And she can't change the system; she is just an author. The edifice that is Publishing doesn't care for or about authors, the same way the music industry doesn't care for or about its artists. Sure, there are cool "gets it" people like Trent Reznor who make music but aren't a part of The Music Industry; but to be frank Trent's a crazy artist who would make his music even if he couldn't afford to eat. That kind of person will never be brought down by the Machine.

What I was trying to say, before I got all rambly, was that the internet seems to be providing a reach and a voice for people as individuals that hasn't been possible before. And as a result it's breaking down all the great conglomerates, which are quite big and far-reaching at the moment, and allowing people to work by and for themselves and to succeed at it.

I know it will have an effect on our day-to-day lives, I can only wonder what that effect will be.

And I hope Kit gets a chance to publish a book by herself, and that it's a huge success.

]]>
Tue, 31 Jan 2012 00:15:00 +1100 3111cf1b8e202237172deb2a9f146c4c
D&D Next, or: WotC ate my 4E (2012-01-13) https://matthew.kerwin.net.au/blog/?a=20120110_dndnext D&D Next
or
WotC ate my 4E

This is the email discussion my D&D group has been having this morning.


Gonfei:

Thought I would post this here in case anyone hasn't seen it.

http://www.wizards.com/dnd/Article.aspx?x=dnd/4ll/20120109

5th edition is on the way, thoughts?


DM:

Hmm that seemed a bit quick but I guess 5 years is the same amount of time between 3.5 and 4.

I don't know... I really like reading about game theory, I find it all interesting. Particularly articles by Mike Mearls, he's a clever guy, he's been playing rpgs for 30 years and still seems to think they can be made better and better. Which is a great thing really.

But it's so hard not to react cynically when you consider the money people have spent getting even the basics of 2 DM guides, 3 PH books and 3 MMs.

So yeah I think I'll have fun following its development from the sidelines while trying not to think about the potential influence that massive publishing companies may have had on this decision.

In a way I guess that new editions are always a positive thing? It's not like anyone ever breaks into our houses and destroys all our previous books and adventures, we can always continue with them. An influx of new ideas is welcome.


Thom (me):

The zeitgeist seems to be that the system will probably be fine and dandy, Wizards dropped the ball with 4E by failing to support it properly even though the system itself works well as intended, and no one wants to face another damned version war; the community's already fractured enough as it is.


Korgul (R.I.P):

I am surprised that WotC waited 5 years for this. I heard an interview with some guy who worked on DnD at TSR & WotC from 3 - 4 and he said the release structure is what bankrupted TSR. WotC's ability to redesign and therefore resell DnD is what made it profitable.


Thom:

They didn't exactly wait. There was the trickled release of "core" 4E books, then there was essentials which is basically 4.5. They've managed to keep it almost fresh, and halved the usual product cycle time for a D&D release.


Gonfei:

Monte Cook, one of the lead designers of 3rd edition has been re-hired to help with 5e. I really hope WotC don't plan on reverting back to 3.5 style D&D just because Pathfinder has been doing really well in the last 6 months.


Thom:

I think there's actually a big chance of that. The version wars started because "everyone" "loved" 3/3.5, then 4E "floundered" while Pathfinder "flourished", and now 5E is apparently going to be highly community-driven, which means the vocal *ority (the ones who started the version war, and boycotted 4E, and play Pathfinder) will get a lot of airtime.

I personally hope they don't do that. Pathfinder already does. I want something like 4E, but maybe a little more crunchy.

Note: I didn't mean "crunchy as opposed to fluffy", I meant "crunchy as opposed to smooth" i.e. over-balanced and uniform, like 4E. I adore fluff.


DM:

Actually come to think of it I'm really interested to see which areas they decide to redesign. Like Thom said there's no part of 4th edition that you'd really say is broken, it all works fine. It just depends on which areas they decide can be more fun or more creative than they currently are.

Personally I'd be happy with the same basic system with a revised system for damage, wounding, health and healing. And a different take on minions. And I'd also like for skills to be a much bigger focus for the game.

Maybe they could use a skill tree system? As you progress in levels you choose areas within your skills to focus on and gain new actions/powers?

But I guess they'll probably be going for more of an overhaul feel than just tweaking. Or maybe not?


Gonfei:

Nobody really knows anything at this point, it's all just speculation. I am worried about how much they will listen to the community though, too many people have "good ideas" that they really haven't thought through. And like thom said, often times the most vocal people are the ones who are complaining. I really hope the designers can differentiate between what people say they want, what they are actually looking for, and people who just want their favorite game to have the D&D logo on it.


Korgul:

It sounds like they want a rules system that pleases everybody, but that means it will please nobody

Whatever happens I will buy the book when it comes out as I am sure many other people will


Thom:

BookS.


DM:

Yeah that would be the biggest shame, if they try and emulate a product that already exists, purely for the sake of competition. Surely the community would much rather a variety of systems than every game trying to be the same thing?

I guess that's the thing about community engagement though; they'll get a whole lot of conflicting opinions about the game to sort through, mostly from the kind of vocal people that *know* that everyone else is playing games wrong, and must be saved.

And those opinions aren't really about making a new style of game, those gamers probably plan on continuing to play Pathfinder anyway, so any advice that comes from that group will only really distort the ideas that everyone else has for a game that would be truly new.

...

Pretty much what Gonfei said 20 minutes ago, I had that email ready to send then got distracted by birthday cupcakes.


⟨ off-topic for a bit ⟩


DM:

[...] does anyone else get the feeling that soon we're going to be hearing a lot of very familiar promises about a new era of digital tools in d&d?

"5th edition will even include a revolutionary digital tabletop that will let you design dungeons and play with people over the internet using 3d rendered maps and creatures"..... ORLY?

...

That was an imaginary quote by the way, I haven't heard anyone actually say that yet, but I have a feeling it's coming soon.


Korgul:

There are 3rd party tools that provide that kind of experience already. I understand that remote play in a virtual table top is quite popular in the US. WotC would want to cash in on that.


DM:

Yeah other people have put out similar tools lately, the joke is that these features were all promises of what 4th edition would include when it launched. They never eventuated, people stopped asking after a while and I have a sneaking suspicion that we'll now get the same set of tools readvertised as a shiny new feature of 5th ed.

It'll be good to have all those different options at last, but yeah it would be a bit rich if they sell 5th ed on products that people were expecting to get when they bought their first subscription to d&d insider.


Shreth:

I think my thoughts have already been said by others so here:
http://www.gamesradar.com/randy-dragon/
Skyrim Mod video.


Korgul:

I like shreths comment best.

The webs will be thick with flame wars for the next year on this issue. Then full of bitter gamers longing for their beloved 4e.


I don't know that it adds much to the discussion, except that we're discussing it (and I suppose that's yet more publicity for WotC). I guess we, like everyone else, are just waiting for more news, and are keen to see what plays out in the end.

... Matty /<

]]>
Fri, 13 Jan 2012 17:28:38 +1100 9c57ae8eb1b6236a859ba3648bec2840
Maths of The 12 Days of X-mas (2011-12-06) https://matthew.kerwin.net.au/blog/?a=20111206_twelve_days

The Maths of The Twelve Days of Christmas

I know this has been discussed many times over the course of the years, but I'm adding my bit, just to muddy the already opaque waters that tiny bit more.

On the first day of Christmas my true love gave to me a partridge in a pear tree.

Er, sure...

On the second day of Christmas my true love gave to me two turtle doves and a partridge in a pear tree.

Wait, "and a partridge.."?

On the third day of Christmas my true love gave to me three french hens, two [more] turtle doves and [another] partridge in a pear tree.

What's with all the partridges in pear trees?

By now I'm sure you've realised that my true love didn't just give me:

  • 12 drummers drumming,
  • 11 pipers piping,
  • 10 lords a-leaping,
  • 9 ladies dancing,
  • 8 maids a-milking,
  • 7 swans a-swimming,
  • 6 geese a-laying,
  • 5 go-old rings,
  • 4 colly birds,
  • 3 french hens,
  • 2 turtle doves, and
  • a partridge in a pear tree.

In fact that's what she gave me on just the last day. To work out how much crap she actually gave me over the course of the twelve days we can resort to Pascal's Triangle. First we draw out the triangle with thirteen or so iterations, then we highlight various diagonals, and revel in the awesomeness.

The second (purple) diagonal gives us the day of Christmas. The third (green) diagonal tells us how many gifts we received on that day. The fourth (blue) diagonal shows how many gifts we've received in total up to that day. Thus we can see that after all twelve days are up, I've received a total of 364 stupid, stupid gifts, such as jumping noblemen, bagpipe players and bloody trees with birds in and whatnot.

All that's missing is the final count of how many of each stupid gift I was given. This can be illustrated in tabular form:

Stupid Gift              | Number per Day | Number of Days | Total
partridges in pear trees |  1             | 1..12 ⇒ 12     |  12
turtle doves             |  2             | 2..12 ⇒ 11     |  22
french hens              |  3             | 3..12 ⇒ 10     |  30
colly birds              |  4             | 4..12 ⇒ 9      |  36
gold rings               |  5             | 5..12 ⇒ 8      |  40
geese a-laying           |  6             | 6..12 ⇒ 7      |  42
swans a-swimming         |  7             | 7..12 ⇒ 6      |  42
maids a-milking          |  8             | 8..12 ⇒ 5      |  40
ladies dancing           |  9             | 9..12 ⇒ 4      |  36
lords a-leaping          | 10             | 10..12 ⇒ 3     |  30
pipers piping            | 11             | 11..12 ⇒ 2     |  22
drummers drumming        | 12             | 12..12 ⇒ 1     |  12
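
If you'd rather let a machine check that arithmetic, here's a quick brute-force sketch (PHP, purely for illustration):

<?php
// Count every gift given over the twelve days.
$total   = 0;
$perGift = array_fill(1, 12, 0);
for ($day = 1; $day <= 12; $day++) {
    // on day N you receive gifts 1..N, and gift number G arrives G at a time
    for ($gift = 1; $gift <= $day; $gift++) {
        $perGift[$gift] += $gift;
        $total          += $gift;
    }
}
print_r($perGift);   // 12, 22, 30, 36, 40, 42, 42, 40, 36, 30, 22, 12
echo $total;         // 364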

WARNING: what follows is my train of thought, and may not necessarily be instructive, interesting, or even correct. Proceed at your peril.

Those green numbers are actually the first twelve triangular numbers. That is, if you draw a triangle with one dot on the first row, two on the second, and so on, the nth triangular number is the number of dots up to and including the nth row.

Mathematically this can be written as: $\Delta_n = 1 + 2 + 3 + \cdots + n = \sum_{k=1}^{n} k$

The blue numbers are the first twelve tetrahedral numbers. Those are what you get if you make a tetrahedron (like a pyramid, but with three faces instead of four), where each layer is made of the corresponding triangular number.

$T_n = \Delta_1 + \Delta_2 + \Delta_3 + \cdots + \Delta_n = \sum_{j=1}^{n} \sum_{k=1}^{j} k$
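
(An aside that isn't in the original derivation: the tetrahedral numbers also have a tidy closed form, $T_n = \binom{n+2}{3} = \frac{n(n+1)(n+2)}{6}$, which gives the total directly: $T_{12} = \frac{12 \times 13 \times 14}{6} = 364$.)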

From the table, the formula for how many of the gift I was first given on the jth day, up to the nth, is: $j \times |\{j,\dots,n\}| = j(n + 1 - j)$

So an alternative way to work out how many gifts I was given in total up to the nth day is: $\sum_{j=1}^{n} j(n + 1 - j)$

...which, since it's calculating the same number as the tetrahedral formula above, implies that: $\sum_{j=1}^{n} \sum_{k=1}^{j} k = \sum_{j=1}^{n} j(n + 1 - j)$

Since Gauss showed that: $\sum_{k=1}^{j} k = \frac{j(j+1)}{2}$

...this all suggests that: $\sum_{j=1}^{n} \frac{j(j+1)}{2} = \sum_{j=1}^{n} j(n + 1 - j)$

I'd like to believe that means that $\frac{j(j+1)}{2} = j(n + 1 - j)$, but I'm not going to attempt to approach that proof at twenty past midnight. (A quick check suggests it can't hold term by term anyway: for $j = 1$ and $n = 12$ the left side is 1 and the right side is 12; the two sums agree in total, not term by term.) Also I'm sure there's something to do with a modulus there [$j \times j'$ where $j' = \text{modulus} - j$, maybe?] but I'll leave it for another day to work out what it all means. If you can explain it in a reasonably clear and meaningful way, please leave a comment below. Our gratitude will be a wonderful reward.

PS. it's taken me 4 hours (±5 minutes) to write this. If only it was something constructive.

]]>
Tue, 06 Dec 2011 01:29:00 +1100 ef5cc6f29557edfd19716c1c2eddb3dc
Gamma World: A Memoir (2011-09-03) https://matthew.kerwin.net.au/blog/?a=20110903_gamma_world Gamma World: A Memoir of My First Experience

Two players – housemates of the DM – already had characters, so the first ten minutes or so involved the DM walking one of the other players through his character creation. I was only half listening because I was eating my dinner at the table (being host means I'm allowed to run a little late sometimes), but some of the numbers seemed pretty... extreme, to my 4E-attuned senses. Then it was my turn.

I rolled my 2d20 to see just what my character would be, and got a boring combination – both origins were already at the table. I rolled again, and came up Empath and Plant. Of course, I named my empath plant "Harvey Triffid." A plant!

I kitted Harvey out in a set of leathers – very symbolic, a plant wearing animal flesh – and spent far too long wondering what sort of weapons he should carry. I knew he was aggressive, so it was two-handed all the way, but what would a plant wield? Then I realised I could just give him a double-barrelled shot gun, which he could use to blast from a distance, and smash point-blank.

Then out came the d10s (four, this time.) Turns out Harvey likes transport – a pick-up truck, a riding horse, a canoe, and night-vision goggles which are so useful for a plant..?

It turns out, rolling dice for just about everything makes character creation way more fun, and the game is set up that it doesn't matter if your guy is nothing like you imagined when you set out. A little more basic page-filling-outing, and we were ready to go.

The four of us (an electro-gravity guy, a stupid high-speed cat, a burningman, and Harvey) were in "A Village" which had been attacked by amusingly ineffectual robots. We four wandered up the mountains to see if we could find the source of the robots. Near the top of the mountain a bunch of mutant badger men and mutant pig men stopped us and said some nasty things. When things degenerated, Harvey attacked (I rolled great initiative), and for the first time in my table-top gaming life, I had my character run around the field, using cover, alternating between ranged and melee attacks, and doing the sorts of things I do in other game genres.

Gamma World has an interesting (and I think, brilliant) mechanic, whereby if you fire a single shot from your ranged weapon in an encounter it doesn't use any ammo, but if you fire a second shot you are in "all-out" mode, and at the end of the encounter your ammo is gone. This introduces an actual tactical and strategic decision-point for you as a player, which has been rather lacking in 4E. Do you open with a salvo, then charge into melee? Or save your one shot for when you might need it later in the fight? Or do you just go all out, blasting at everyone, and hope you can survive on melee until you find some more ammo?

Memorable Moment: the electro-grav guy (Sigismundo) made his first melee attack of the adventure, using his double-handed parking meter. The DM half-jokingly told Sam (Sig's player) that if he rolled a 20, money would come out of the parking meter. Sam said, "Ok" and rolled a 20. The Porker died, and Sigismundo spent a little while picking up the scattered change.

We owned the Porkers and Badders, and attempted to bash our way through the door of a tower. After a few bounces, we decided to let Con (Con Flagration, the burningman) stand beside it for a couple of minutes. After he'd charred it sufficiently, Harvey kicked it in and...

Wait, have I mentioned the Alpha Mutations?

At the start of every encounter you draw a card which gives your character a short-lived mutation. Gamma World is highly... broken. The in-universe world, that is. Everyone is broken and everything's mutated and there's radiation everywhere, and it's all messed up. So a little mutagenic flux every now and then is nothing out of the ordinary.

As Harvey kicked in the charred door, his plant-nose mutated into a two foot long proboscis which he could use to suck life-force from his enemies. Inside the tower were some more Badders, and a huge flying lion with laser eyes (yes, really.) We swarmed the yexil (the laser wyvern-lion), and the cat (by now nicknamed Flash [Ahh-ahh] by everyone) kept scratching its eyes while Harvey sucked its blood, Con surrounded it with himself (oh yeah he's also a doppelganger, so he can summon a 1-round minion identical to himself in every way, including equipment and fire-damage aura), and Sig bashed it with his mighty meter.

And so on. We killed the yexil, mopped up the badders, cleared out another room full of badders that also contained a magic hi-tech health-moving machine (which would drain HP from non-friends, and channel it to friends of whomever had hit the button last.) Harvey was knocked unconscious in this room, but the machine brought him back, eventually. Then in the caves beyond we came upon a bunch of radioactive birds and giant radiation-beaming moths.

At this point I decided to change my strategy, engaging what I assumed to be highly mobile and dangerous monsters from a distance, going all-out with my shotgun. Harvey charged into the room, and with one shot was dropped from 15HP to 3. Then a moth zapped him with radiation, and he dropped to the floor. A few rounds later the birds and moths were defeated, but by then I'd failed a few too many death saving throws, and Harvey was a goner. And I didn't feel (that) bad about it at all. I felt like Harvey had lived through more adventures, and achieved more, in his 0.8-level journey than many three or four level characters I've played.

More Memorable Moments:

  • Sig found a leaky plasma rifle. His 7d8 attack dealt 38 damage, one-shotting the bird or moth or whatever it was he was aiming towards, and incidentally introducing a new light source in the cave (by drilling a narrow hole through to the surface). It leaked, dealing 2d8 damage to him. He rolled 2.
  • After the machine-room fight, Harvey's new alpha mutation was that he could breathe water and swim at his move speed. Some powers have an "overcharge", where you basically crank up the effect, but have to save against it backfiring. I got the DM to roll the save for this one, so I didn't know the outcome. Because: Overcharge: At any time, you can roll a d20. 10+: While this card is readied, you can mentally communicate with fish within a mile of you. 9 or less: While this card is readied, you think you can mentally communicate with fish within a mile of you.

Over all, I'd say that Gamma World is brilliant, and if you have a chance to play it, totally take it!

... Matty /<

]]>
Sat, 03 Sep 2011 12:00:00 +1000 348217aca131c757ea7ec73f079c0ffb
Open Letter - Nuclear Power (2011-07-04) https://matthew.kerwin.net.au/blog/?a=20110704_open_letter_nuclear_power An Open Letter – Nuclear Power

I've been involved in a bit of a discussion with my local federal member about energy costs, and have decided to post my most recent contribution as an open letter. I want more learned people to give me understanding, and I want more people to be involved in this sort of discussion.


Hi Ewen, thanks for getting back to me so quickly. This will probably be a rhetorical debate, so don't address it until you have a little time and are feeling like a bit of a ramble.

On 1 July 2011 13:22, Jones, Ewen wrote:
> Matthew,
>
> If you are worried about the rising costs of energy, then nuclear is
> probably not your answer. It will reduce emissions but it is expensive and
> you do have to do a lot of work with the waste. I understand that the
> Chinese are developing smaller nuclear power stations where the waste will
> be reduced significantly using Rare Earth mineral (we are a supplier).
>
> For mine, we have to look no further than coal for our power. It is cheap
> and abundant. What we have to do is control the emissions and JCU’s algae
> programme is the answer there.

My two points in the survey about energy should probably have been separated, because as you noted, in the short term they're completely at variance. My concern about rising costs of energy is two fold:

On one hand, I've recently lost my job and am having trouble finding another in the Townsville region, so any rising costs are particularly stressful, and energy is a necessary resource which seems to be increasing in cost disproportionately, and there's not much I can do about it – so I whinge at the government. I want the quick fix, silver bullet that makes power cheap right now.

On the other hand I have intuitive concerns about long-term costs and growth. We dig and burn our own coal, which makes Australia self-sufficient and I support that completely, but coal is to my mind still an industrial revolution-era power source. It produces all the energy we need, but I feel like we've pretty well exhausted it in terms of innovation – current projects seem to be focused on either squeezing the last vestiges of efficiency out of the process, or increasingly, dealing with wastes. While carbon scrubbing and recycling is a brilliant area of research, and can be useful in other areas than just coal power waste management, I don't believe it can be used as a justification for continued emphasis on coal power.

Talking about waste management, we come back to one of your points against nuclear power. If I may be flippant: how much work is it to deal with nuclear waste, in comparison to scrubbing gases and burying mounds of toxic ash and inventing ludicrous carbon-pricing schemes? I know that, in political and social terms, "nuclear" is a very risky word, but its very stigma can work in its favour. We already know the wastes are risky, and we have the advantage that people have been working on reducing the risks and solving the issues since nuclear power was invented; in that respect it already has a big head-start on coal. There are still (foolish and misinformed) people out there who doubt that human carbon emissions are having a negative effect, but *nobody* says nuclear waste is ok.

Back to innovation and development: you said yourself, the Chinese are developing potential (safer, cleaner) nuclear alternatives and we're supporting them. Why shouldn't we take a more active role in the process? Australia is a brilliant technical nation, but as far as the world (especially China) is concerned, we're just a big pile of dirt with valuable minerals in, and a couple of universities on top. If we produce uranium and rare earth minerals, and provide great technical knowledge and training at universities, why do we export all of that, and leave ourselves just coming up with better ways to burn rocks? We could benefit from the energy, and boost our reputation at the same time.

It is very expensive, and I understand that the capital costs in setting up a nuclear energy system are probably the biggest barrier, and yes it would increase short-term costs to everybody noticeably; but if a government is willing to sacrifice itself on the issue of coal wastes, shouldn't nuclear power be just as worthy a cause? And, out of my own interests, do you have any approximate numbers (the vaguest ball-park figures) comparing the cost of starting and running a nuclear power scheme vs coal, taking into account regulatory costs, taxes, etc.? All I could find was this page <http://www.nucleartourist.com/basics/costs.htm> which is both American and found on a pro-nuclear website.

As I mentioned, I'm just an out of work computer programmer, not a nuclear, political, or economic expert, but these are my thoughts and understandings on the issue of energy costs. Please feel free to correct any wrong assumptions, and address any missing concerns, and thanks for taking the time to read this unfounded and rhetorical essay.

]]>
Mon, 04 Jul 2011 12:39:00 +1000 265a40370b0545e0e9651b90fe868105
JS: /x*$/ in global replace (2011-06-08) https://matthew.kerwin.net.au/blog/?a=20110608_javascript_global_regexp Javascript Gotchas 1 – /x*$/ in global replace

Here is a sample javascript function:

function quote(s) {
    s = s.replace(/^\s+|\s+$/g, '');
    s = s.replace(/^"*|"*$/g, '"');
    return s;
}

The two lines of the function are intended to do the following:

  1. remove any number of whitespace characters at the start or end of the line (by replacing them with an empty string); and
  2. replace any number of (including zero) double-quotation marks at the start or end of the line with one double-quotation mark.

For example, the string Hello world should come out as "Hello world", and the string "Oh wow" should come out unmodified; etc.

Unfortunately, as highlighted in this question on StackOverflow, it doesn't quite behave as expected. The combination of zero or more [characters] followed by the end-of-string and the global regular expression parameter to String.prototype.replace combine in an unfortunate edge case where the machinery of the regular expression engine and replacement algorithm must be understood before the behaviour can be predicted.

Put simply, "Oh wow" comes out as "Oh wow""

Thom Blake provided the following explanation, which I will copy verbatim:

Essentially: because there is no actual "end-of-string token" the regular expression engine can't consume the $, and because the String.prototype.replace function relies on explicitly falling off the end of the string, our function:

  1. matches zero or more (i.e. one) quotation mark at the end of the string and replaces it with a single quotation mark,
  2. repeats the match at the current location (i.e. immediately after the newly-added quotation mark), because the previous match was at a different location in the string,
  3. matches zero or more (i.e. zero) quotation marks at the end of the string and replaces it with a single quotation mark,
  4. attempts to repeat the match at the next position in the string (because the previous match was at the same location), but this falls off the end of the string, so it stops.

Note that it's not because we're replacing quotes with quotes; it's only because we're matching against zero or more [something] at the end of the string and finding "more" and then "zero".

So unfortunately there is no way to use a single regexp replacement to achieve our goal. A working solution is to remove all quotation marks from the string, and then affix a pair. This has the added advantage of sanitising the string, and ensuring matched quotation marks.

function quote(s) {
    s = s.replace(/^\s+|\s+$/g, '');
    s = s.replace(/^"+|"+$/g, '');
    s = '"' + s + '"';
    return s;
}

Examples: the original page has live demos here (with a link to their source) – two broken versions that show the doubled replacement (one replaces zero or more quotation marks with X, so a string ending in " gains a double XX at the end, and because it matches zero or more, repeated clicks keep piling Xs onto both ends of the string), and two working versions (one using two separate, non-global replacements, the other using the strip-and-re-wrap function above).
Update:

Apparently Ruby's regular expression engine behaves the same way:

irb(main):001:0> 'foo'.gsub(/\A#*|#*\z/, '#')
=> "#foo#"
irb(main):002:0> '#foo'.gsub(/\A#*|#*\z/, '#')
=> "#foo#"
irb(main):003:0> '##foo'.gsub(/\A#*|#*\z/, '#')
=> "#foo#"
irb(main):004:0> 'foo#'.gsub(/\A#*|#*\z/, '#')
=> "#foo##"
irb(main):005:0> 'foo##'.gsub(/\A#*|#*\z/, '#')
=> "#foo##"
irb(main):006:0> '##foo##'.gsub(/\A#*|#*\z/, '#')
=> "#foo##"
]]>
Wed, 08 Jun 2011 16:29:00 +1000 dcef0d89a71f2605ca2f199279dc10d0
Ruby: return in ensure (2011-05-09) https://matthew.kerwin.net.au/blog/?a=20110509_ruby_gotcha_1 Ruby Gotchas 1 – return in ensure

The difference between Ruby's implicit “value of a block” and its explicit “return from a method with a value” is best illustrated by example.

def foo
	puts 'a'
	0
ensure
	puts 'b'
	1
end

irb(main):001:0> foo
a
b
=> 0

def bar
	puts 'a'
	return 0
ensure
	puts 'b'
	return 1
end

irb(main):002:0> bar
a
b
=> 1

The explicit return statement literally means “stop the method right now, and use this return value”, whereas in uninterrupted flow the ensure sub-block's final value is simply discarded and the method returns the value of its main body.

The gotcha happens thus:

def baz
	puts 'a'
	raise 'fail'
ensure
	puts 'b'
	1
end

irb(main):003:0> baz
a
b
RuntimeError: fail
	from (irb):19:in `baz'
	from (irb):34
	from /usr/bin/irb1.9.1:12:in `<main>'

def freb
	puts 'a'
	raise 'fail'
ensure
	puts 'b'
	return 1
end

irb(main):004:0> freb
a
b
=> 1

The same thing happened in Ruby 1.8 – it's a language feature: an explicit return inside ensure ends the method normally, discarding whatever exception was in flight. This is partly why I don't use return statements, except when I really want to be explicit about breaking the flow.

]]>
Mon, 09 May 2011 09:35:00 +1000 942ceceee7e4b71fb6f8c8abbfb63680
3d6 Revisited (2010-07-28) https://matthew.kerwin.net.au/blog/?a=20100728_3d6_revisited 3d6 Revisited

After a few sessions (has it been two months already?) using 3d6 instead of d20, I've come up with a few revisions and clarifications of the original rules. These are designed to increase the fun; the balance has to be accounted for elsewhere.

1,1,1: automatic miss.
  • If the power has the keyword "weapon" or "implement" you drop the appropriate item;
  • If not (or if not possible) you fall prone.
1,1,2 / 1,1,3 / 1,2,2: automatic miss.
2,2,2: roll 1d2 and add that to the attack
3,3,3: roll 1d3 and add that to the attack
4,4,4: roll 1d4 and add that to the attack
5,5,5: roll 1d5 and add that to the attack
4,6,6 / 5,5,6: automatic hit.
  • If the attack would have hit anyway, replace [W] rolls with their maximum (crit).
5,6,6: automatic hit.
  • If the attack would have hit anyway, add maximum [W] damage (high crit).
    • If high crit weapon, add it again.
6,6,6: automatic hit.
  • If the power deals damage, add maximum [W] damage (high crit).
    • If the attack would have hit anyway, add it again
      • If high crit weapon, add it again.
  • If not: use the effect progression *

* The "effect progression" is designed so you don't feel let down when see you that glorious: ⚅⚅⚅ but don't have anything to maximise. It came up when our first triple-six was the wizard casting his (non-damaging) "sleep" spell (the houserule there is: the targets automatically fail the first save, and if I'm feeling generous they don't get the one round of slowedness first.) Since then I've come up with a basic guide to letting everyone enjoy the potency of that 1-in-216 chance.

For "negative" effects my progression is:

pushed/pulled/slid → prone → dazed → stunned → unconscious → ¿ 1d4 damage ?
slowed → immobilised

Of course, if it's obvious that a particular immobilising action should progress to restrained, for example, that's perfectly alright too. Not to mention blinded, deafened, dominated, weakened, etc.

The "positive" progression is a bit trickier, because there are basically the three parallel progressions:

cover → superior cover → insubstantial
lightly obscured → heavily obscured → totally obscured
concealment → total concealment → invisible

... But at any point it could be totally reasonable for one branch to lead into another, or somewhere else. Those ones I just play by ear.

The overall goal with these progressions is that I want to make my players feel like they succeeded so well (or their spell was so potent) that the effect is the same, but turned up to 11.

... Matty /<

]]>
Wed, 28 Jul 2010 22:30:00 +1000 1f4ab2f00bdeb83078cb4bf2208707f5
d20 or 3d6 (2010-05-12) https://matthew.kerwin.net.au/blog/?a=20100512_d20_or_3d6 d20 or 3d6

I don't know how to say this, so I'll just come out and say it:

I'm going to replace my players' d20 attack rolls with 3d6.

Since it's my first time DMing 4e, and we've decided to start at level 5, I reckon there are enough differences already that making one other change won't cost too much in terms of organisation. Sure, I'll have to balance some of my encounters a bit differently, but I did anyway (I've never made up a l5 encounter before).

I've decided arbitrarily that critical hits and misses will work thus:

  • 18 - guaranteed hit, 2 x max total damage
  • 17 - guaranteed hit, 1 x max total damage, plus regular damage roll (including all modifiers and bonuses)
  • 16 - guaranteed hit, 1 x max total damage (i.e. regular 4e crit)
  • 15-6 - regular attack (compare to defense, etc.)
  • 5-4 - miss
  • 3 - critical miss; if the attack had the "weapon" or "implement" keyword, you drop the weapon/implement in question; otherwise you fall prone.
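
For the curious, a quick sketch (PHP, just because it's handy; not part of the house rules) that enumerates all 216 outcomes of 3d6 and shows how often those special ranges actually come up:

<?php
// Tally the totals of all 216 possible 3d6 rolls.
$counts = array_fill(3, 16, 0);
for ($a = 1; $a <= 6; $a++)
    for ($b = 1; $b <= 6; $b++)
        for ($c = 1; $c <= 6; $c++)
            $counts[$a + $b + $c]++;

$crit   = ($counts[16] + $counts[17] + $counts[18]) / 216;  // guaranteed hits
$fumble = $counts[3] / 216;                                 // critical misses
printf("P(16+) = %.1f%%, P(3) = %.1f%%\n", 100 * $crit, 100 * $fumble);
// => P(16+) = 4.6%, P(3) = 0.5%  (vs 5% each for a natural 20 / natural 1 on d20)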

I guess there are loop-holes and questions, most of which can be sorted out at the table, like: high crit weapons (which will probably add a [second] damage roll on 16+); or what happens when you roll 3 and fall prone while flying; etc. If/when we find good resolutions for these and other issues, I may share them.

I'll keep the d20s for the regular ability checks, since.. well, who cares, really. And I might let Elite and Solo monsters roll 3d6 for attacks too, depending on the situation, since that will add spice (and hopefully fun) to the fights. Apart from that, regular 4e all the way, I guess, and we'll see how it goes.

First game tomorrow night.

p.s. Cthulhu is not invited

... Matty /<

]]>
Wed, 12 May 2010 21:44:00 +1000 da308ad61d47a28464d2fa2c6db63522
Software Design (2010-03-10) https://matthew.kerwin.net.au/blog/?a=20100309_software_design Software Design

Lemma: the primary function of any computer program is to produce outputs derived from some inputs.

The correctness of a program is defined by its ability to consistently produce the correct/appropriate outputs for any set of inputs. To this end, every program could, given infinite storage and an appropriate retrieval mechanism, be represented by a finite state machine. All computer programs are bounded by limits (the size of an integer, the amount of addressable memory, etc.), so they theoretically have a finite number of possible inputs. Given time to generate the full set of inputs, and the appropriate outputs, every program could be represented by a single lookup operation. Put another way: every program essentially represents a transformation of a set of inputs to a set of outputs, and that transformation is entirely deterministic.

The most important piece of information to take away from this is: how a program transforms its inputs to outputs is irrelevant.

That statement requires some qualification.

The transformation of inputs to outputs is primarily measured by the correctness of the outputs, however other metrics can be applied, depending on the nature of the program. I assert that the most important metric (after correctness) is the time taken to derive outputs from inputs. Since our usual tolerance for erroneous output is "none", it can be assumed that all programs produce the "right" outputs for a given set of inputs — any program that doesn't is buggy. Since all programs produce the same outputs, a good differentiator between programs is: how quickly they do it. In all cases, the faster the results are made available, the better. As such, how a program transforms its inputs to outputs is relevant, since a "better" algorithm may produce results more quickly than a "worse" one.

A third metric, easily overlooked in the modern age (especially with "infinite storage" and "infinite processing" options provided by things like cloud computing), is data usage. To wit: using less space is better. This has been accepted as fact since the dawn of computing, possibly born of the historical need to work within the limits of a computer's hardware. Trade-offs can and will always be made between space and time, trying to find the appropriate balance, so that correct outputs are produced in reasonable time without a chance of over-allocating the computer's available resources.

That is the key to the third metric: resource allocation and utilisation. If your transformation's "how" makes optimal use of the computer's resources, to produce the correct outputs in minimal time, that transformation is as good as it can be.

It's late and I should be in bed, so I'll stop rambling and summarise the steps of software design as I see them:

  1. define the outputs that are considered "correct" for any set of inputs — use boundaries, induction, arithmetic, really really big truth tables, whatever is required
  2. define how quickly the outputs should be derived (Hint: the answer is usually "as fast as possible")
  3. enumerate the resources available on the target computer/platform; processors, storage (registers, cache, memory, hard discs), I/O devices, everything
  4. sketch out an algorithm that will produce the correct outputs for all inputs, and in tandem devise a resource allocation scheme that supports it
  5. write it

Fortunately that fifth step is pretty well understood, and is actually (probably) the easiest of all. Really it all comes down to step four — when you find a good solution to that one, the world is your mollusc.

... Matty /<

]]>
Wed, 10 Mar 2010 00:59:59 +1100 f3845d4c02c2449da1e2d437f6e9c747
Wide Screen Web (2010-02-11) https://matthew.kerwin.net.au/blog/?a=20100211_wide_screen_web Wide Screen Web

Every screen these days is wide. As if the traditional 4:3 landscape ratio wasn't hard enough for presenting text, now we have lap-tops with 800 pixels crammed into a six-inch vertical frame, and 1.6 times that many horizontally; yet we're somehow expected to present information — textual information — in this environment. I don't know about you, but I have trouble scanning text when the lines are too long. I'm no design or typesetting guru, but I'm pretty sure it's a Bad Thing™ to make your text too wide (why else would newspapers use columns?). The two solutions I've seen employed most on the web that attempt to deal with the issue are:

  1. make the text huge — which, yes, makes it easier to read, but the fact that you can only fit a couple of lines on the page makes it feel uncomfortable, and I have a feeling it limits our ability to scan the text; or
  2. create a portrait-oriented panel, with acres of blank margin-space to either side — which is much neater, sure, and it looks more like printed paper and all that, but think of all those pixels going to waste! Something makes my inner programmer feel really uncomfortable seeing all that real estate lying fallow, or worse — filled with horrible wallpaper backgrounds. I actually dislike the narrow-column-with-wide-margins layout more than the hard-to-read, kludgy feeling huge text.

I have an idea — one which (surprise surprise) I'm pretty sure I'll never implement, but I'll put it out there anyway: imagine a web browser that defines its viewport as half the width of the screen (say 640px) but twice as high (1600px). Now imagine that insane 2.5:1 portrait viewport split laterally and rendered side-by-side on the screen. Each of those columns would be around 1.25:1, not far off the √2:1 ratio we're used to with A4 paper. Because there would be, effectively, two (let's say) A6 pages side-by-side, we could quite easily present text to the user in a familiar format (i.e. like a book) without having to unnecessarily supersize anything, and without wasting half the screen's real estate on empty margins. We could put the scrollbar on the right, as per normal, and marvel at the awesome effect we get when the stuff at the top of the right page scrolls onto the bottom of the left page. I think that would be cool. Page Down could either flip a half-page (the right page onto the left, and the next page onto the right) or a whole page (like a real book). The finer details of the design are pretty free, but I reckon there's a bit of potential there to make something really intuitive, fun, and useful.

I've never actually seen an e-book reader, but apparently they do a similar thing when presenting high volumes of text. Why don't we have that for the WWW on our regular old general-purpose computers? I think it would be great. Now someone implement it for me, so I can see if I'm right.

PS yes I have a little 11-or-so-inch 1280×800px lap-top, which would roughly equate to two A6 pages (with some space left over for important things like skyscraper ads and sidebars), and at work (where the germ of the idea was originally planted) I have two 1680×1050px monitors: yes, that's a total of 3360 pixels wide! I could fit almost five A4 pages there, and yet most websites present me with a single (a single) column of text! What a waste!

... Matty /<

]]>
Thu, 11 Feb 2010 23:20:00 +1100 375f18e50b8932889be2c33bb6685431
Twits are for Twits (2009-11-06) https://matthew.kerwin.net.au/blog/?a=20091106_twits_are_for_twits Twits are for Twits

I propose that the word “twit” be banned, except when used as a pejorative — describing people who use the word “twit.” It is not a single piece of sound emitted by a small but pretty avian; nor is it the act of emitting that sound — the word you are seeking is “tweet.” The only word that even sounds like “twit” — twitter — is the raucous cacophony of constant tweeting made by many tweeters as they tweet their tiny tweets. Never has a product's name so eloquently described and defined its purpose and scope as has Twitter. And yet so many of its users don't understand, don't appreciate the subtlety and sheer brilliance of the name. Whenever I hear someone say “I will twit that” or “do you twit?” I feel that maybe there should be a basic comprehension test that users must pass before they can sign up to the service… but Twitter is just a vehicle for what is the most basic and fundamental human function: communicating in the tribe. If people fail the entrance exam for Twitter, should we have an entrance exam for life? And how many people out there would not pass? When I think these thoughts I get depressed.

... Matty /<

]]>
Fri, 06 Nov 2009 15:17:00 +1100 6043cbd4ea38ede2993699531106244f
PseudoRandom Number Generator (2008-11-15) https://matthew.kerwin.net.au/blog/?a=20081114_prng PseudoRandom Number Generator

My new web host uses the Suhosin PHP Hardening extension. This is all very well and good, and I appreciate the extra security I suppose it affords my site. Unfortunately, two options my host uses (suhosin.srand.ignore and suhosin.mt_srand.ignore) have the effect of completely disabling the seeding of both the rand() and mt_rand() pseudorandom number generators in PHP. Since some time around PHP 5 both PRNGs have automatically seeded themselves, so it's not a fatal thing to have happen; what it means, though, is if I were to — for the sake of argument — generate a document with random contents, I could no longer just record the seed if I wanted to recreate the document.

I can see that misusing the PRNG seeds could lead to security issues, for example if your site depends on the PRNGs to generate session info. However, since for the past few years manual seeding has been optional, wouldn't one assume that anyone in such a situation would either: a) use the built-in seeding, or b) make sure they pick a really good seed? Disabling seeding altogether doesn't patch up a vulnerability, it simply removes half the power of PRNGs.

To get around the whole issue, I implemented a dodgy PRNG (which I found on this site: http://computer.howstuffworks.com/question697.htm) in PHP. It was apparently invented by K&R, so must therefore be awesome. Because PHP isn't C, my implementation is a pale imitation of the gloriously simple two-liner on the website, but it seems to work, and my documents can be recreated from just their seeds. I wouldn't want to use it for anything important, though.

If you're interested, you can find my PHP implementation here.
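
For reference, the underlying recurrence is tiny. Here's a minimal sketch of the same K&R-style linear congruential generator, written in Java rather than PHP purely for illustration (the constants are the classic ones from the C example); like my version, it shouldn't be used for anything important:

public class TinyRand {
    private long state; // the seed; record it and you can recreate the whole sequence

    public TinyRand(long seed) {
        this.state = seed;
    }

    // next = next * 1103515245 + 12345, wrapped to 32 bits;
    // the high-ish bits then give a value in 0..32767.
    public int next() {
        state = (state * 1103515245L + 12345L) & 0xFFFFFFFFL;
        return (int) ((state >>> 16) % 32768);
    }
}

Two TinyRand objects built with the same seed produce identical streams, which is exactly the "recreate the document from its seed" property that the disabled seeding took away.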

... Matty /<

]]>
Sat, 15 Nov 2008 00:19:00 +1100 4b3b262d673b28e9a1ef708feb31a554
Stupid Error Message (2008-11-02) https://matthew.kerwin.net.au/blog/?a=20081102_stupid_error_message Stupid Error Message

I just used my wife's PDA to send an SMS to my two Dungeons & Dragons players. The SMS application is set up with an email-like interface, so after typing out my message, I made an assumption and added both phone numbers to the To: line, separated by a comma. When I hit send, I received some sort of warning dialog that I didn't understand and can't remember, possibly about unrecognised recipients; so I ignored it. Then, I got what has become my favourite error message ever. I will paraphrase it:

You separated the recipients with a comma; replace the comma with a semicolon and try again.

I don't usually talk out loud to computers and machines, but this time I did. I believe I said something like "Do it yourself you stupid thing." If it can detect the exact problem, and also suggest the single, precise solution, why not just do it silently for me? Even my wife agrees, it's a stupid error message. This is the sort of thing that makes Google applications feel so good to use — they just work. More application developers should keep that in mind.

... Matty /<

]]>
Sun, 02 Nov 2008 09:50:00 +1100 0a52716446cdd452e97a0bfe64d8314a
Movember (2008-11-01) https://matthew.kerwin.net.au/blog/?a=20081101_movember Movember

Hi All,

During Movember (the month formerly known as November) I'm growing a Mo. That's right I'm bringing the Mo back because I'm passionate about tackling men's health issues and being proactive in the fight against men's depression and prostate cancer.

To donate to my Mo you can either:

  1. Click this link and donate online using your credit card or PayPal account, or
  2. Write a cheque payable to ‘Movember Foundation’, referencing my Registration Number 1569666 and mailing it to:

Movember Foundation
PO Box 292
Prahran VIC 3181

Remember, all donations over $2 are tax deductible.

The money raised by Movember is used to raise awareness of men's health issues and donated to the Prostate Cancer Foundation of Australia and beyondblue - the national depression initiative. The PCFA and beyondblue will use the funds to fund research and increase support networks for those men who suffer from prostate cancer and depression.

Did you know:

  • Depression affects 1 in 6 men....most don't seek help. Untreated depression is a leading risk factor for suicide.
  • Last year in Australia 18,700 men were diagnosed with prostate cancer and more than 2,900 died of prostate cancer - equivalent to the number of women who will die from breast cancer annually.

For those that have supported Movember in previous years you can be very proud of the impact it has had and can check out the details at: [ Fundraising Outcomes ].

Movember culminates at the end of month Gala Partés. If you would like to be part of this great night you'll need to purchase a [ Gala Parté Ticket ].

Thanks for your support.

More information is available at http://www.movember.com/.

Movember is proudly grown by Holden and Schick.

Movember is proud partners with the Prostate Cancer Foundation of Australia and beyondblue - the national depression initiative.

... Matty /<

]]>
Sat, 01 Nov 2008 16:37:00 +1100 5d5cc7a5219a105a8728669327a8cf79
Disabled Menus (2008-09-27) https://matthew.kerwin.net.au/blog/?a=20080927_disabled_menus Disabled Menus

Someone (Joel Spolsky?) wrote that you should never disable a menu item. At the time, I thought he was just being a puritan, but just recently I, too, have become a subscriber to the idea.

Last night I was up until after 2am playing Spore — I progressed my horribly disfigured shark-wasp into the space age, and had begun the initial process of colonising nearby planets. Then, at about the time my wife wanted me to shut off the computer, I realised — I couldn't save my game! I tried all the apparently logical things: I completed (or abandoned) all my pending missions; I returned to my home planet again and again; I hovered over all the cities; I searched the menus; I did everything I could think of until eventually I got sick of seeing that greyed-out "Save" menu item, with no explanation of what to do to make it not-grey. So I prayed for an auto-save at the civilisation-to-space changeover, and exited the game without saving.

Today I discovered that, in fact, the last time the game's progress had been saved was when I did it manually towards the end of the civilisation level. I'd lost the end of my civilisation phase, my beautiful space ship, and the planets I'd been exploring and working on. I basically had to start over; which I did, mind you, at full force. (The new space ship is much scarier looking than the old one.. I think, although I can't exactly remember it.)

The other thing I discovered today, since I had time to explore the system, was that during the space phase you can save while in orbit around a planet, not inside its atmosphere. I don't recall seeing that written anywhere — most significantly, nothing at the game's main menu even hinted that, for example, "You can't save while inside a planet's atmosphere." All I had to go on was a greyed-out "Save" option.

At least they could have had a useful tool-tip. Even better, they could have discoloured (but not disabled) the menu item, and changed its behaviour to display a dialog explaining why I can't save at this particular juncture. Turning the menu item off without offering the slightest hint as to why, or what to do to get it back, is just infuriating for the user. I didn't care what technical reason there was for it — it's bad enough that as soon as a level transition occurs you have to sit through the whole "design your new whatever" process without the ability to save it for later; but I can understand that one. But why the hell I couldn't save my game, even after trying to address all the technical issues I could imagine, was purely aggravating.
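
For what it's worth, the friendlier behaviour is cheap to build. Here's a hedged little Swing sketch of the idea (all names and messages invented; this is obviously not Spore's actual code): the menu item stays clickable, and when the action isn't currently available it explains why instead of greying out.

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.*;

public class SaveMenuDemo {
    static boolean inOrbit = false; // hypothetical game-state flag

    public static void main(String[] args) {
        final JFrame frame = new JFrame("Demo");
        JMenuBar bar = new JMenuBar();
        JMenu game = new JMenu("Game");
        JMenuItem save = new JMenuItem("Save");

        // Never grey it out; if saving isn't possible, say why.
        save.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                if (inOrbit) {
                    // actually save the game here
                } else {
                    JOptionPane.showMessageDialog(frame,
                        "You can't save while inside a planet's atmosphere.\n"
                        + "Fly up into orbit first.");
                }
            }
        });

        game.add(save);
        bar.add(game);
        frame.setJMenuBar(bar);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(300, 200);
        frame.setVisible(true);
    }
}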

Since being on the receiving end of this nastiness, I'll endeavour to not make the same mistake with any user interfaces I end up designing in future.

P.S. somewhat relatedly: the other day I worked out how to sync Media Player and my wife's PDA. Even using ActiveSync it was tortuous. The tips about what to do in Media Player — the tips inside Media Player — were wrong, or at least referred to menus and items that Media Player didn't have. No wonder Apple blows them away with a couple of gimmicky UI features and a bit of polish. Microsoft's user interface really sucks.

... Matty /<

]]>
Sat, 27 Sep 2008 20:58:57 +1000 2c1107caf769746bee6b0a67a4fa34e8
Monty Hall (2008-04-16) https://matthew.kerwin.net.au/blog/?a=20080416_monty_hall Monty Hall

You may have heard a bunch of hype lately about the Monty Hall problem, that keeps raising its head from time to time. The problem, stated unambiguously (and thus verbosely), is this:

Suppose you're on a game show and you're given the choice of three doors. Behind one door is a car; behind the others, goats. The car and the goats were placed randomly behind the doors before the show. The rules of the game show are as follows: After you have chosen a door, the door remains closed for the time being. The game show host, Monty Hall, who knows what is behind the doors, now has to open one of the two remaining doors, and the door he opens must have a goat behind it. If both remaining doors have goats behind them, he chooses one randomly. After Monty Hall opens a door with a goat, he will ask you to decide whether you want to stay with your first choice or to switch to the last remaining door. Imagine that you chose Door 1 and the host opens Door 3, which has a goat. He then asks you "Do you want to switch to Door Number 2?" Is it to your advantage to change your choice?
[http://en.wikipedia.org/wiki/Monty_Hall_problem]

Here is how I solved the problem, in my head:

  • Since I know beforehand that Monty will eliminate one of the goat doors after I make my choice, I can imagine that the two goat doors are actually one.
  • Because that "one door" is represented twice, I have a ⅔ chance of picking it. Therefore, I have a ⅓ chance of picking the car door.
  • If that doesn't make sense, consider that behind each door is a big tube, and two of the tubes lead into a room containing a goat, and the other tube leads into a room containing a car. There is one goat, but I have a ⅔ chance of picking it.
  • When one of the goat door options is then taken away, I can choose between two actions:
    • stick with my original choice, which has a ⅓ chance of being the car
    • swap my choice to the door that, since it is my only remaining option, has a ⅓ chance of not being the car.. that is, it has a ⅔ chance of being the car

Said another way:

  • If I picked the first goat door, the host has to reveal the second goat door. There is a ⅓ chance of this happening.
  • If I picked the second goat door, the host has to reveal the first goat door. There is a ⅓ chance of this happening.
  • If I picked the car (⅓ chance), the host can either:
    • reveal the first goat door (⅓ × ½ = ⅙ chance of this happening)
    • reveal the second goat door (⅓ × ½ = ⅙ chance of this happening)
  • Of the four outcomes, I have a ⅓ + ⅓ = ⅔ chance of having the goat if I don't switch, and a ⅙ + ⅙ = ⅓ chance of having the car. That seems obvious, right? So then if I do switch, I have a ⅔ chance of switching away from the goat, to the car. So I have a ⅔ chance of winning by switching.

Said yet another way:
Each door has a ⅓ chance of being the car. If we split the doors into: the one I picked, and the two I didn't pick, the "one I picked" has ⅓ chance of being the car, and the chance that the car is in the "two I didn't pick" is ⅔. One of the two I didn't pick is revealed as a goat door, so it has a 0 chance. Therefore the other door I didn't pick must have (⅔ - 0 = ⅔) chance of being the car.

There is another path to consider:
If I know beforehand that my first choice doesn't matter, because after I make it Monty will eliminate one of the doors, I can "throw away" my first choice. After Monty gets rid of one of the doors, I know that there is a ½ chance that I will pick the car door. By discarding the past, I now have a 50% chance of getting the car. That's true and correct and all, because if you eliminate past knowledge, you really do have a ½ chance of picking the right door. But why would you do that? There's a perfectly valid option that gives you a ⅔ chance, which is much better than ½. Why not pay attention to past information and use it to your advantage?
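
If none of the arithmetic convinces you, it's easy to brute-force. Here's a quick Java sketch (a throwaway simulation, good for intuition rather than proof) that plays the game a million times and tallies how often staying and switching win:

import java.util.Random;

public class MontyHallSim {
    public static void main(String[] args) {
        Random rng = new Random();
        int trials = 1000000;
        int stayWins = 0;
        int switchWins = 0;

        for (int i = 0; i < trials; i++) {
            int car = rng.nextInt(3);  // door hiding the car
            int pick = rng.nextInt(3); // my first choice

            // Monty opens a goat door that isn't my pick.
            int open;
            do {
                open = rng.nextInt(3);
            } while (open == pick || open == car);

            // The door I'd switch to is the one that's neither picked nor opened.
            int other = 3 - pick - open;

            if (pick == car) stayWins++;
            if (other == car) switchWins++;
        }

        System.out.println("stay:   " + (double) stayWins / trials);   // ≈ 1/3
        System.out.println("switch: " + (double) switchWins / trials); // ≈ 2/3
    }
}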

... Matty /<

]]>
Wed, 16 Apr 2008 22:05:09 +1000 064179f1cf4fe5bfddaef9f77fc98add
Collision Detection (2008-03-09) https://matthew.kerwin.net.au/blog/?a=20080309_collision_detect Collision Detection

I had an idea about detecting object collisions in 3D worlds while walking my daughter in her pram. I'm certain people already use this idea in the real world, but I can't be bothered researching it to find out who does. The idea stems from experience I've had in dodgy little game apps (like an assessment piece I did in second year university) where you need to detect object collisions (for example, to make pool balls bounce,) and the simplest way to do it is by testing the relative locations of all the objects at a discrete point in time (like, each frame.)

This works well for the most part; you quickly work out how to find two overlapping objects and apply forces to each of them in the exact opposite directions from each other. Then you discover the "sticky objects" phenomenon, where two objects will overlap each other too much, and when you test their locations again next frame they're still overlapping.. then they either get stuck together forever, or spontaneously shoot away from each other at an appreciable fraction of the speed of light.

So then you work out that when you find two overlapping objects, the first thing you do, before applying the force or moving on to the next frame, is move them apart so they are no longer overlapping. And then comes the hyperspace phenomenon: when objects move really fast they go right through each other. At that point, discrete location testing fails.

The solution, obviously, is using continuous (not discrete) location testing. It's so obvious. It's surprising anyone would even consider doing it any other way. Right? The only problem with using continuous location testing is this: how? I have a theoretical solution which I've not turned into code, and can't be bothered turning into code, but if I did I'm sure it would work fine.

First: instead of picking a point in time and saying "the object is here," you need to pick two points in time and say "the object moves from here to there." If you're willing to assume that an object moves in straight lines (at least over short periods,) you can easily model the volume of space occupied by the object over the period by drawing lines from here to there. The space inside the lines contained the object at some point between then and now.

Second: find collisions between the volumes the way you'd find them between any objects. Keep track of the overlapping volumes.

Third: when you have collisions, for each object involved: work out when it first entered and last exited the overlapping volume. If no two objects were actually in there at the same time, it's not a real collision. Otherwise, calculate the positions and velocities of the objects now as if they'd bounced off each other at the time of the collision. And there you have it.

Obviously there're plenty more details to include, like multiple object collisions, finding the first collision, finding subsequent collisions, etc. But those are just details. I've done the hard work already. Now someone should go implement it. Preferably in the same game that uses the AI I proposed in an earlier post.

On reflection, you could probably simplify it too, and only do the volume projections for fast-moving objects (like bullets). I think calculating a long bullet (one that occupies all the space the bullet would occupy over a period of time) would be sufficient for an FPS, unless you want hyperspace bullets that sometimes go through things without leaving a mark.
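
To make the "when did they first touch" part concrete, here's a minimal 2D sketch (my own toy, assuming circular objects that move in straight lines over the frame): it solves a quadratic for the first time the two swept circles are exactly touching.

public class SweptCircles {
    // Time in [0,1] within the frame at which the two circles first touch, or -1 if they don't.
    // (px,py) is circle A's position relative to circle B at the start of the frame,
    // (vx,vy) is A's velocity relative to B over the frame, radiusSum = radiusA + radiusB.
    static double timeOfImpact(double px, double py, double vx, double vy, double radiusSum) {
        double a = vx * vx + vy * vy;
        double b = 2 * (px * vx + py * vy);
        double c = px * px + py * py - radiusSum * radiusSum;

        if (c <= 0) return 0;    // already overlapping at the start of the frame
        if (a == 0) return -1;   // no relative motion, and not already touching
        double disc = b * b - 4 * a * c;
        if (disc < 0) return -1; // the paths never get close enough

        double t = (-b - Math.sqrt(disc)) / (2 * a); // earlier of the two roots
        return (t >= 0 && t <= 1) ? t : -1;
    }

    public static void main(String[] args) {
        // A "bullet" starting 10 units left of a stationary ball, closing 20 units this frame,
        // combined radius 1: first contact at t = (10 - 1) / 20 = 0.45.
        System.out.println(timeOfImpact(-10, 0, 20, 0, 1));
    }
}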


FOV

By the way, I'd like to add to my last post in the part about perspective projections, where I gave { x, y, z } → { x / (z + 1), y / (z + 1) } ; that formula, especially the 1, was actually derived from a more complicated calculation that added a new variable, the distance to screen. The distance to screen represents the distance from the "viewer" (the focal point) to the "screen" (the plane onto which the image is projected.) That is just another way of saying the field of view, or the arc distance of the rendered scene. The image to the right illustrates how increasing the distance to screen shrinks the field of view (the red viewer is about 2.8 times the blue viewer's distance from the screen, and the width of the visible area is approximately 60%). If you really want to know, the field of view can be calculated as 2 × atan(screen width / (2 × distance to screen)). I chose 1 because I felt like it. For a 2-pixel wide scene at one unit per pixel that gives a FOV of 90°; for 1600px it gives 179.86°. GLX lets you specify the field of view, and from that and the projection settings calculates the distance to screen for you. So there you go.

... Matty /<

]]>
Sun, 09 Mar 2008 19:00:15 +1100 828cffea78d5f28a5d2f98dc6df7e8a0
Simple Rasterisation (2008-03-06) https://matthew.kerwin.net.au/blog/?a=20080306_simple_raster Simple Rasterisation

My friend Sir Bob asked me yesterday if I knew anything about drawing something like a three-dimensional box on a flat screen, like in Java, and have it so that if you were to drag the mouse, you could rotate the box.

Simply put: yes. I very briefly outlined the concept of rasterisation, how it works in a context like OpenGL, and how you could do it manually. This post is a bit of an elaboration on the "how you could do it manually" part, with basic geometry.


These are simple ways of rendering 3D objects in a 2-dimensional space (like a computer screen.) Libraries like OpenGL don't do this; OpenGL uses transformation matrices, but those matrices are derived from simple geometry like this.

Projections

Converting from object-space to screen-space. For example, you may have a line from {0, 1, 1} to {1, 2, 0}, and you want to convert it to a line on the 2-dimensional screen.

Fixed POV Orthographic Projection

Ignore the z part.
{ x, y, z } → { x, y }
For example: {0, 1, 1} becomes {0, 1}, and {1, 2, 0} becomes {1, 2}

Fixed POV Perspective Projection

Shrink things that are further from the view-point.
{ x, y, z } → { x / (z + 1), y / (z + 1) }
For example: {0, 1, 1} becomes {0, ½}, and {1, 2, 0} becomes {1, 2}

Rotation

Moving the object in its 3-dimensional space, by rotating it about the point {0, 0}.

Rotation About the Y Axis

Rotate the vector { x, z }

1. current angle = 90° if x = 0 and z > 0; 270° if x = 0 and z < 0; atan(z / x) otherwise
2. distance = sqrt(x² + z²)
3. new angle = current angle + rotation
4. new point = { x = distance × cos(new angle), z = distance × sin(new angle) }

Note: the y value doesn't change. I'm sure you can work out why not.

Rotation About the X Axis

Rotate the vector { y, z }
(as above, using y in place of x)

Rotation About the Z Axis

Rotate the vector { x, y }
1. Easy solution: let the 2D render surface take care of that (eg. Java Graphics2D's .rotate(); note: in the documentation for rotate you can see a two-dimensional transformation matrix.)
2. Real solution: as above, using y in place of z

Rotation About an Arbitrary Y Axis

To rotate an object about a Y axis that isn't at {x=0, z=0}, you do the same maths as above, but translate the x and z terms so they are relative to the axis.

For example, to rotate the line above by 90° about the axis {x=-1, z=1}:

  • the point {0, 1, 1} is translated to {1, 1, 0} by subtracting the axis; then rotated to {0, 1, 1}; then translated to {-1, 1, 2} by adding the axis again
  • the point {1, 2, 0} is translated to {2, 2, -1}, rotated to {1, 2, 2}, and translated back to {0, 2, 3}

Then to draw the line in a perspective view, the screen points are: {-0.33, 0.33} and {0, 0.5}
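
Since Bob asked about Java, here's a minimal sketch of the same maths in code (a throwaway illustration of the formulas above, not anything we actually wrote; Math.atan2 takes care of the x = 0 and negative-x cases instead of the piecewise atan):

public class TinyRaster {
    // Rotate {x, y, z} about a vertical (Y) axis at {axisX, axisZ} by 'degrees'. y is untouched.
    static double[] rotateY(double x, double y, double z,
                            double axisX, double axisZ, double degrees) {
        double rx = x - axisX;                              // translate relative to the axis
        double rz = z - axisZ;
        double angle = Math.atan2(rz, rx) + Math.toRadians(degrees);
        double dist = Math.sqrt(rx * rx + rz * rz);
        return new double[] {
            dist * Math.cos(angle) + axisX,                 // translate back again
            y,
            dist * Math.sin(angle) + axisZ
        };
    }

    // Fixed POV perspective projection: shrink things that are further from the view-point.
    static double[] project(double[] p) {
        return new double[] { p[0] / (p[2] + 1), p[1] / (p[2] + 1) };
    }

    public static void main(String[] args) {
        double[] a = project(rotateY(0, 1, 1, -1, 1, 90));  // ≈ {-0.33, 0.33}
        double[] b = project(rotateY(1, 2, 0, -1, 1, 90));  // ≈ {0.00, 0.50}
        System.out.println(a[0] + ", " + a[1]);
        System.out.println(b[0] + ", " + b[1]);
    }
}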


Disclaimer: I worked most of this post out in my head, so if it's wrong, tough.
Also, there are additional factors you can introduce, such as a field of view, clipping box, etc. that give you a bit more control over what you draw.

My general rule when doing anything graphical is this: don't be afraid to hold your hands out and imagine what is happening to the objects. Visualising everything is key to understanding it, much more so than the maths.

... Matty /<

]]>
Thu, 06 Mar 2008 23:04:03 +1100 50824460ab3656a6006e5490d6b7e136
Grim Fandango (2008-02-25) https://matthew.kerwin.net.au/blog/?a=20080225_grim_fandango Grim Fandango

Platform: PC (Windows 95/98/ME/2000/XP)
Developer/Publisher: LucasArts
Released: Sept 30, 1998

"Hey kiddles, check out my bone-saw.."

What's Right

Aside from the issues I'll list below in "What's Wrong" just about everything about Grim Fandango is right. For me, though, the most right thing about Grim Fandango is the ambience. The graphics, while very primitive by modern standards, convey all the information you need to achieve your mission goals and set the appropriate mood at the same time. It's quite impressive to see what could be achieved back in the days when 3D was new and interesting, and we didn't have HWTL, Shader 3.0, DX10, ...

The artwork is astounding, with such attention to detail and consistency of the theme, sometimes I've wandered around a location just looking at the fine details on the edges of things — places where regular gameplay wouldn't lead you, or if it did, not in a way that would suggest you take the time to look at the scenery. I should also note the progress indicator at the top of the load/save screen: as you make your way through the game, a bas relief of your journey is revealed; you really should take the time to look at it, everything is in there!

The music is perfect. It suits the overall style of the game precisely, really allowing you to get lost in the story; always adding an extra dimension to the level, never overpowering, always appropriate.

And then there's the story itself. The game puts you in the main role of a seedy pulp fiction crime novel, delving into the depths of the underworld as your hero drags himself to great heights, always to be dragged down again, on a journey that will eventually lead to his ultimate redemption. You know the formula. The thing that sets Grim Fandango apart from anything else I've ever seen in the genre is the fact that the entire game is based in the Land of the Dead. The main protagonist, Manuel "Manny" Calavera, and every other person in the game, is a skeleton — the soul of one who has died in the Land of the Living, making their way towards the Ninth Underworld according to the Aztec beliefs of the afterlife.

The game plays like one of those "choose your own adventure" books, where you are provided with a series of context-dependent statements and responses. Fortunately Grim Fandango is scripted in such a way that you can never reach a dead end — the conversations are more for entertainment purposes and story development than actual path selection. And entertaining they are; I'm often content to explore the entire conversation space before selecting the response I feel will lead to the next stage of the game. It really helps you develop an appreciation of the depth of the admittedly very stereotypical characters.

The last thing I'll say is right about Grim Fandango is the most important: it's really fun. Without that, it wouldn't be a game. But it is, it really is. I bought a copy the other day (my second or third, they always end up missing) and played through it in about a week — I really tried to drag it out and savour the whole experience. My response at the end is: I want more.

What's Wrong

My first complaint about Grim Fandango, one I've had since I first laid eyes on the game, is the fact that it sometimes crashes, unexpectedly and sometimes very spectacularly. I have a feeling it's graphics-related — the game is from the Direct-X 6 era and has some "experimental" features that probably vanished in the past three or four iterations of DX. (Hint: don't use hardware acceleration. It doesn't make a difference to the graphics, and any CPU made since the Pentium III era can handle the game without batting an eyelid.) It also has the annoying ability to save the entire gamestate — including the "locked-up-ness" flag, so if you, for example, played all the way through until the final year, had a small crash-to-desktop issue after returning to Rubacava and were forced to reload your last save game at the train tunnel leading to the world beyond, and accidentally hit some magic key combination (probably Ctrl+Shift+Enter, I can't remember) and the game locked up, and in a panic you saved over your save game, then every time you loaded that save game you would find yourself standing before the guardian of the portal, without any music playing, unable to move, and basically screwed.

My second complaint about Grim Fandango is the controls. You steer with the keyboard, holding Shift makes you run, and when Manny hits an edge (obstacle, well, ledge, etc.) he turns away from it. Sometimes shifting between scenes involves a good-the-first-time machinima cut scene, such as Manny climbing up or down the last half of a ladder. If you bump the appropriate edge of the screen, you can find yourself forced to watch Manny make his way up and down those damned ladders several times — usually when you were in a hurry, which is why you overshot in the first place. Manny also has a tendency to miss doorways and ladders when running, but his short little legs walk so slow you often forget that taking your hand off the Shift key is an option. It's not a big deal, but it is irritating, and it can distract you a bit from the awesomeness of the game.

My third and final complaint is one you tend to forget while not playing: if you don't know what to do next, it's really, really hard to work out what to do next. Sometimes the steps required to solve a particular puzzle or situation can be very involved, and require a lot of back-and-forthing across the level; and if you don't know beforehand (or at least have a vague idea) what you intend to do, it can be very hard to work out what needs to happen to get it done. And some things are purely incidental, in that no amount of planning can really inform you. (Hint: if you use Meche's ashtray as she's about to take a puff on her cigarette, she might burn a hole in her stockings.) Having played through the game a few times, I generally have an idea what I need to do.. but even so, it's very hard to resist the temptation of Googling "grim fandango walk through" sometimes.

Overall

I have to say, Grim Fandango is one of my all-time favourite games. If I was the kind of person to apply a star-rating, I'd rate it 4½ out of 5. The bad things are so far outweighed by the good, and the spectacular things are so .. well .. spectacular, very few PC games in the past 10 years have come close to matching it. I strongly recommend everyone play Grim Fandango at least once (preferably twice, or more!)

So go. Now!

"Heh heh, stupid octopus."

... Matty /<

]]>
Mon, 25 Feb 2008 21:42:21 +1100 aebc64454ad3d6b119452bc0aadbdcac
Discordian Numbers (2008-02-21) https://matthew.kerwin.net.au/blog/?a=20080221_discordinumbers Discordian Numbers

I was thinking last night about the Fibonacci Numbers, as one does, and came up with these recurrence relations:

f(n) = 0                ... n = 0
       1                ... n = 1
       f(n-1) + f(n-2)  ... n > 1
       f(n+2) - f(n+1)  ... n < 0
Ϝ(n) = |f(n)| = f(|n|)
S(n) = f(n+2) - 1 = f(n) + f(n+1) - 1
Σ(n) = Σ{i=0..n} f(i) = S(n)        ... n ≥ 0
                        0 - S(n-1)  ... n < 0

It turns out that the first sequence, f(n), is the good old Fibonacci Sequence, generalised to negative numbers. The others are mildly absurd, but that is appropriate for one who is the Prophet of a Religion of Discord.

Good Lord Omar said: all things happen in fives, or are divisible by or are multiples of five, or are somehow directly or indirectly appropriate to 5.

  • The fifth Fibonacci number: f(5) is 5. Coincidence? Probably.
  • Then, the sum of the first five Fibonacci numbers: S(5) is 12. As it would happen, the 12th Fibonacci number: f(12) has the value of 12×12. Coincidence? Yes!
  • Then, the sum of the first 12 Fibonacci numbers minus 12: S(12) - 12 gives 364. As it would happen, this is precisely the number of days in a year, according to the Religion of the Prophet St. Matty. Coincidence? Absolutely!

But according to the Prophet, all coincidences are the will of the Goddess. And that is as it should be.

If you're interested in some values of the relations, and are too lazy to work them out yourself, here are a few divinely generated by the holy prophet himself:

n:    ..., -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, ...
f(n): ..., -144, 89, -55, 34, -21, 13, -8, 5, -3, 2, -1, 1, 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...
Ϝ(n): ..., 144, 89, 55, 34, 21, 13, 8, 5, 3, 2, 1, 1, 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...
S(n): ..., -56, 33, -22, 12, -9, 4, -4, 1, -2, 0, -1, 0, 0, 1, 2, 4, 7, 12, 20, 33, 54, 88, 143, 232, 376, ...
Σ(n): ..., -88, 56, -33, 22, -12, 9, -4, 4, -1, 2, 0, 1, 0, 1, 2, 4, 7, 12, 20, 33, 54, 88, 143, 232, 376, ...
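
Or, if you'd rather have a machine do the divine generation, here's a quick Java sketch (mine, entirely uncanonical, and naively recursive, which is fine for |n| ≤ 12):

public class Discordian {
    // The Fibonacci sequence, generalised to negative n using the recurrences above.
    static long f(long n) {
        if (n == 0) return 0;
        if (n == 1) return 1;
        if (n > 1) return f(n - 1) + f(n - 2);
        return f(n + 2) - f(n + 1); // n < 0
    }

    static long S(long n) {
        return f(n + 2) - 1;
    }

    public static void main(String[] args) {
        for (long n = -12; n <= 12; n++) {
            // n, f(n), Ϝ(n) = |f(n)|, S(n)
            System.out.println(n + "\t" + f(n) + "\t" + Math.abs(f(n)) + "\t" + S(n));
        }
    }
}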

Any mistakes are intentional, and must be accepted as dogmatic fact.
I love creating my own religion.

... Matty /<

]]>
Thu, 21 Feb 2008 00:00:00 +1100 f1d5a350007561a244fcdd0f7a091870
Polymorphic Construction (2008-01-25) https://matthew.kerwin.net.au/blog/?a=20080125_polymorphic_construction Polymorphic Construction

Heed these words of wisdom.

As a general rule in Java, never call a polymorphic method from a constructor. That is to say, never call a method that is neither static nor final, in the constructor of a class that is also not final. Because of certain ordering issues with the default construction behaviours, you can quite easily find yourself in an apparently absurd situation which is hard to understand without a lot of thought.

Take the following classes as an example:

public class A {
    public A() {
        validate();
    }

    protected void validate() {
        // Nothing to assert. Pass!
    }
}

public class B extends A {
    private final String blah = "blah";
    private final String fwee;

    public B() {
        fwee = "fwee";
    }

    @Override
    protected void validate() {
        if (!blah.equals("blah")) {
            throw new IllegalStateException();
        }
        if (!fwee.equals("fwee")) {
            throw new IllegalStateException();
        }
        // Otherwise pass!
    }
}

Can you spot the problem? No? Neither did I, at first.

If you compile these classes [zipped source, compiled jar] and construct a new B() you should receive a horrible NullPointerException from the first line of B.validate(). It turns out that when you execute a constructor such as B() the following things happen, in this order:

  1. general constructor — either super() or this() ... in this case super()
  2. field initialisation — initialisers for fields defined outside of any method ... in this case blah = "blah"
  3. specified behaviour — the rest of the constructor ... in this case fwee = "fwee"

Because super() means A(), and A() calls validate(), we find ourselves attempting to execute blah.equals("blah") before the field initialisation. So blah is null!

If you were a good (read: paranoid) programmer, you might have felt a bit nervous about assigning fwee = "fwee"; inside the constructor, since you probably realise that Java always calls another, more general, constructor (like super() or this()) at the start of every constructor — unless you explicitly call one yourself. But you, like I, may have been falsely comforted by the fact that both blah and fwee are explicitly final. Well, such comfort is false comfort, it would seem.

I don't know how to define, let alone phrase, the proper rule so I'll stick with my gross generalisation. Follow this rule, and there's one more bizarre Java bug you won't have to waste half an hour debugging in future.
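
One workaround I can think of (just a sketch of the idea, not the only possible fix) is to keep the validation but move it out of the constructor chain entirely: construct first, then validate, for example behind a static factory method:

public class A2 {
    public A2() {
        // no validate() call in here
    }

    protected void validate() {
        // Nothing to assert. Pass!
    }
}

public class B2 extends A2 {
    private final String fwee;

    public B2() {
        fwee = "fwee";
    }

    @Override
    protected void validate() {
        if (!fwee.equals("fwee")) {
            throw new IllegalStateException();
        }
    }

    // Validation only runs once the object is fully constructed and initialised.
    public static B2 create() {
        B2 b = new B2();
        b.validate();
        return b;
    }
}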

Never call a polymorphic method from a constructor.

... Matty /<

]]>
Fri, 25 Jan 2008 00:00:00 +1100 5bc782bc161e9e6266d2934f370485bf
Game Bots 2 (2008-01-19) https://matthew.kerwin.net.au/blog/?a=20080119_game_bots_2 Game Bots (part 2)

I was just talking with my friend Glen about my post from yesterday about awesome AI bots that have to "see" in order to know where things are. If you can't be bothered reading the previous post, here's the basics:

Bots shouldn't have intimate awareness of object locations. In Joint Ops, if there's a solid object (eg. wall) between you and a bot, it can't see you. If there's a soft object (eg. smoke, bush, curtain, net, etc.) between you, it knows where you are and can shoot with utmost accuracy. Additionally, as soon as a bot knows you exist, it tends to know how fast you're moving, and in what direction, so it will rarely miss with a leading shot.

The first-order approximation to a groovy solution is to have the bot render the world in three colours: black for background cruft, green for friendly players/objects, and red for enemies. It can then use this cheaty segmentation to classify objects and react appropriately.

The awesome solution is to have the bot build up a "mental map" of the static world, by moving around and seeing how things move (ie. close things move faster). Once it has an image of the static world, it can detect anomalies (ie. it can predict what it should see at any particular time/place/direction, and if there's a difference between the prediction and what it really sees, that difference is probably a dynamic object). Then it can classify them/etc. however it pleases.

The idea that struck me while chatting with Glen is a second-order approximation to the solution, where we skip the stage of building a mental map of the static world, and just render the actual static world straight from the game data. Then we could render the real world, replete with dynamic objects, and differences can be collated/classified/etc. as per the final solution.

This solution pleases me in that 1) it doesn't require nearly as much AI, nor does it require a period of time for the bot to learn the world, and 2) it still allows camouflage to play a part. If you're wearing a speckly green uniform, and you stand amongst a bunch of speckly green grass, you're not going to stand out much. Especially if the bot works at low resolution and low colour depth.

At low resolution, with minimal effects (only the ones that matter, like smoke — bots don't care about eye candy), rendering the two frames, diffing the images, and doing some basic image segmentation/classification ought to allow someone to run a couple of such bots on a client machine, even while playing the game themself! The bottleneck is entirely in the classification part, which can probably be handled by cheating a bit, as long as it feels like the bots are seeing and thinking, instead of just cheating. I suggest that if a bot sees a reasonable sized blob of pixels that don't belong, there's nothing wrong with it being handed a hint by the game and being told which team that blob belongs to.
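
As a sketch of how cheap that first pass could be (made-up names, no real game integration; just the diff-the-two-renders idea): compare the live frame against the render of the static world, and count the pixels that look out of place.

public class BotVision {
    // Both arrays are the same low-resolution frame as packed RGB ints:
    // staticWorld is the render without dynamic objects, liveFrame is the real render.
    static int countAnomalies(int[] staticWorld, int[] liveFrame, int threshold) {
        int anomalies = 0;
        for (int i = 0; i < liveFrame.length; i++) {
            int a = staticWorld[i];
            int b = liveFrame[i];
            int dr = Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF));
            int dg = Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF));
            int db = Math.abs((a & 0xFF) - (b & 0xFF));
            if (dr + dg + db > threshold) {
                anomalies++; // this pixel probably belongs to a dynamic object
            }
        }
        // A big enough cluster of anomalies is a blob worth classifying (or shooting at).
        return anomalies;
    }
}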

I am well pleased. Now someone must implement this bot, that can't see through smoke or bushes, is hindered by camouflage, and gets completely confused when you turn out the lights.

See Also: Game Bots
... Matty /<

]]>
Sat, 19 Jan 2008 00:00:00 +1100 5097dbda092254a791e11729e1c3297a
Game Bots (2008-01-18) https://matthew.kerwin.net.au/blog/?a=20080118_game_bots Game Bots

I have a budding idea about an FPS AI system that I'll never get around to writing. The inspiration comes from playing Nova Logic's Joint Operations games — particularly on downloaded community-made missions where insanely accurate enemy NPCs are placed to snipe (often with RPGs) from inside or behind dense foliage. We tend to call that "cheating."

My initial response was to cheat in kind, and modify the foliage textures to be completely transparent, thus letting me see the game essentially the way the bots do. However, that is cheating. I'd prefer a much more awesome solution. My idea is this: let the bots see what we see.

Let us have an AI that renders a display essentially the way players do (albeit probably at much lower resolution and complexity) and uses image analysis techniques to play roughly the way people do.

That is: a bot entering the game may know the shape of the terrain surrounding it, so it can map the pixels of its frame to the terrain. Then it can move in a predictable fashion, so that any pixels* in subsequent frames that don't move as though they're attached to the terrain can become foreground. It could use geometry to determine shapes of and distances to foreground objects. By moving around, the bot would be able to build up a basic spatial map of terrain and static objects. After developing a certain world awareness, the bot could detect movement that doesn't correspond with the static world; then ideally it would build a basic picture of the thing that is moving, categorise it (friend, foe, person, vehicle, etc) and respond appropriately.

* we could assume that the green pixels of a leaf will remain greenish as we move, so we can recognise it as the same object. Dynamic textures/lighting/etc. could be very confusing.

On a contemporary PC like mine, I'm guessing I could only host a single bot (dedicated), rendering frames at a low resolution like 640x480x8bpp. The majority of the work would be in image processing (rather than rendering — although my AGP card also limits the size of the image that can be passed back to the CPU in reasonable time), although for the most part I guess it would be a case of detecting changes between images, and comparing them to predicted changes, which may not be too messy.

The first step could be to render the world in three colours: black for neutral/static, green for friendly, and red for foe. A concentration of red pixels probably means you're seeing an enemy up close, so you might like to shoot.

Both of these solutions (the awesome AI one and the dodgy shoot-at-red-stuff) make the seeing-through-foliage issue redundant, as the bot sees basically the same thing people see. However the former adds a new dimension: effective camouflage. Imagine wearing a ghillie suit, laying down near a bush, and the AI not hammering your position from 200 metres with a PKM! Or even from 2 metres! How awesome.

... Matty /<

]]>
Fri, 18 Jan 2008 00:00:00 +1100 b437e536a69de0c093804472daef4cdf
Addenda to Things Java Should Have (2008-01-06) https://matthew.kerwin.net.au/blog/?a=20080106_addenda_to_things Addenda to Things Java Should Have

On multiple inheritance in Java, I'd like to add some thoughts to my post of the other week.

Firstly, I no longer like the old syntax for method definitions. This looks much sexier to me now:

public class A {
    public void foo() {}
}

public class B {
    public void foo() {}
}

// variation one: 'extends' where 'throws' goes
public class AB extends A, B {
    public void foo() extends A {
        return A.super.foo();
    }
    public void foo() extends B {
        return B.super.foo();
    }
}

// variation two: almost explicit
public class AB2 extends A, B {
    public void foo() extends A {
        return super.foo();
    }

    public void foo() extends B {
        return super.foo();
    }
}

// variation three: implied
public class AB3 extends A, B {
    public void foo() {     // Overrides both A.foo() and B.foo()
        return super.foo(); // ..*
    }
}

// variation four: misc.
public class AB4 extends A, B {
    public void blah(int n) {
        A.super.foo();
        B.super.foo();
    }
}

This way AB3 can define a single behaviour for any call, irrespective of polymorphism.. although off-hand I can't think of a reason why you'd want to do so.

* note: I'm not sure what the behaviour of AB3.foo() will be. It could either:

  • default to A.super.foo() as the first parent in the extends list, or
  • call the super method that best matches the reference's apparent type; for example: B myB = new AB3(); myB.foo(); would call B.super.foo() because myB is being treated as a B.

The second point would have to have a fallback behaviour in this situation: AB3 myAB3 = new AB3(); myAB3.foo(); because neither parent class matches the apparent type better. And we're still left with the common interface problem.

One other thing about the new syntax I'm not entirely sure of is whether these:

// variation three from the original post:
public class AB5 extends A, B {
    public void foo() {     // only extends A, as B is explicitly
                            // extended below..
        return super.foo();
    }

    public void foo() extends B {
        return super.foo();
    }
}

// .. and by extension:
public class AB6 extends A, B {
    public void foo() {     // only extends B, as A is explicitly
                            // extended below..
        return super.foo();
    }

    public void foo() extends A {
        return super.foo();
    }
}

// my least sure variation:
public class AB7 extends A, B {
    public void foo() {     // extends the first parent (A)
        return super.foo();
    }

    public void foo() {     // extends the second parent (B)
        return super.foo();
    }
}

..should be valid, warnings, or errors. Intuitively, I think AB5 should be fine, AB6 should be a warning that "the default overriding method does not override the default superclass" or something, and AB7 should be a "method already defined" error (maybe one that can be turned into a warning via javac parameters.)

That reminds me, there's a third thing Java should have:

3. A javac parameter for turning the "unreachable code" error into a warning. God I hate that stupid error! Why is: return;foo(); bad, but: if(true)return;foo(); good?! WTF!?!

When the parameter is flagged, the compiler should just drop unreachable statements. It's still valid, damnit!

See Also: Things Java Should Have
... Matty /<

]]>
Sun, 06 Jan 2008 00:00:00 +1100 bd2894e997884f0cfc442a5880ed4a33
Best Game Ever (2008-01-03) https://matthew.kerwin.net.au/blog/?a=20080103_best_game_ever Best Game Ever

The other day my friend bard came to visit. He brought around a PS2 game he had recently purchased, which I tried.

Today I purchased a copy of said game for myself. The game is We ♥ Katamari, and it is officially the best PlayStation 2 game ever.

The goal of We ♥ Katamari is to roll your little sticky ball around, gathering items that are smaller than your little sticky ball. As you gather things, your sticky ball grows. Starting out with paperclips and pencils, you can end up gathering buildings and, I have it on good authority, even continents. Today I rolled up several famous landmarks, including l'Arc de Triomphe, the Disney castle, and Mt. Rushmore.

The game is obviously Japanese, with its bright, flashy graphics and insane soundtrack; but on the whole it is done very professionally and cleanly. Nothing is confusing or overwhelming about the whole setup. Loading screens are hidden behind an amusing monologue: the inane ramblings of the King of the Cosmos. (It sounds weird, but it's done quite well.)

Controls are fairly intuitive. You direct the ball by moving both analogue sticks (both up moves forward, one up one down spins, etc) and there are very few button presses during gameplay (L3+R3 performs a 180° spin — that's the only one.)

All that aside, and most importantly, the game is really fun. There's something satisfying about being bullied about by puppy dogs at the start of a level, then hearing them yelp as you roll them up after growing a bit; not to mention the rewarding screams of citizens as you crush them inside your giant ball of sticky death. Or the feeling of absolute power as you start rolling up cars and trucks, achieving an awesome momentum as you grow faster and faster, knowing that nothing can stand in your way!

Having played this game twice (for several hours each time — time flies when you're completely engrossed) I proffer my opinion that We ♥ Katamari is the best PlayStation 2 game ever.

... Matty /<

]]>
Thu, 03 Jan 2008 00:00:00 +1100 9b4c43ebc57266cddd3d937ceb6aa904
Things Java Should Have (2007-12-11) https://matthew.kerwin.net.au/blog/?a=20071211_things_java_should_have Things Java Should Have

1. A new return type: this

A this method would be written like a void method, with no declared return value and no need for an explicit return statement. However, when called, the this method would actually return a reference to its "this" object. As such, this methods could never be static.

Consider the following example, where I'll abuse the builder pattern slightly, and use the existing String and StringBuilder classes to represent the concept of immutable and mutable strings, respectively.

public class String {
    public String append(String affix) {
        // return type "String" says that this method returns
        // a new object, which implies that this object *won't*
        // actually be modified.
    }
}

public class StringBuilder {
    public this append(String affix) {
        // return type "this" says that this method returns
        // a reference to this object, implying strongly that
        // this object *will* be modified.
    }

    public void clear() {
        // return type "void" says that this method doesn't
        // return anything, so can't be chained. Implies
        // very strongly that this object *will* be modified.
    }
}

The return type of this allows us to chain method calls (like we currently can with StringBuilder.append, which has a return type StringBuilder), but with the added advantage that just from the method signatures a coder would be able to infer certain side-effects of the methods without having to read any documentation. Currently it's not immediately apparent (although it's not hard to work out) that StringBuilder.append returns a reference to the actual StringBuilder object, not a whole new one.
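
For contrast, the closest you can get today is declaring the concrete type and returning this by hand. It chains fine, but the signature tells the caller nothing about side-effects; a made-up class, purely for illustration:

public class TextBuilder {
    private final StringBuilder buf = new StringBuilder();

    // Today's idiom: declare the concrete return type and "return this" yourself.
    // Nothing in the signature says whether the object is mutated or a copy is returned.
    public TextBuilder append(String affix) {
        buf.append(affix);
        return this;
    }

    @Override
    public String toString() {
        return buf.toString();
    }

    public static void main(String[] args) {
        System.out.println(new TextBuilder().append("foo").append("bar")); // foobar
    }
}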

2. Multiple inheritance by extending multiple abstract (or "instantiable") classes.

This one I haven't sorted out all the kinks in my mind. I'm still not entirely sure how to deal with polymorphism, but I have a feeling about what might work. Essentially, I propose the following syntax:

public class A {
    public void foo() {}
}

public class B {
    public void foo() {}
}

// variation one: explicit
public class AB extends A, B {
    public void A.foo() {
        return A.super.foo();
    }
    public void B.foo() {
        return B.super.foo();
    }
}

// variation two: almost explicit
public class AB2 extends A, B {
    public void A.foo() {
        return super.foo(); // A.super is implied since this method
    }                       //  overrides A.foo

    public void B.foo() {
        return super.foo(); // B.super is implied since this method
    }                       //  overrides B.foo
}

// variation three: implied
public class AB3 extends A, B {
    public void foo() {     // A.foo is implied as A is the first
                            //  parent in the 'extends..' list
                            //  which defines foo()
        return super.foo(); // A.super is implied since this method
    }                       //  overrides A.foo

    public void B.foo() {
        return super.foo(); // B.super is implied since this method
    }                       //  overrides B.foo
}

// variation four: misc.
public class AB4 extends A, B {
    public void blah(int n) {
        A.super.foo();      // this could be just "super.foo()", as
                            //  per variation three
        B.super.foo();
    }
}

I'm thinking that there'd have to be some fancy runtime stuff to ensure that A myA = new AB(); myA.foo(); executes the appropriate AB.A.foo() method. I really don't know how we'd deal with this:

public class A {
    public void foo() {}
}

public class B {
    public void foo() {}
}

public interface I {
    public void foo();
}

public class AB extends A, B, I {
    ...
}

I myI = new AB();
myI.foo();

Maybe defaulting to the first parent. I dunno. I haven't even mentioned disagreeing return types, either. Someone more clever than me can think of something, I'm sure.

Edit: check the addenda to this post. (2008-01-06)
... Matty /<

]]>
Tue, 11 Dec 2007 00:00:00 +1100 e99321b14e0ce478859ed7170bcee00e
NetBeans 6.0 (2007-12-07) https://matthew.kerwin.net.au/blog/?a=20071207_netbeans_6_0 NetBeans 6.0

NetBeans 6.0 is really slow compared to 5.5.1

I've been using it for a couple of weeks now at work, and while I like some of the things it's added, I wonder if most of it was worth the cost, considering how unresponsive it's become.

Things I like:

  • the "revert" action thingies on the diff panels (click the little blue arrows and red Xs near the changes to see what I mean)
  • the "local history" concept
  • the change indicators on the sidebar beside the scrollbar (it used to show errors and the current cursor position, now it's much more useful)

Things that are okay, but I could live without:

  • putting most likely matches at the top of the autocomplete menu
  • the bezier splines on the diff panels

Things that are mildly irritating:

  • The highlighter thingy that colours instances of whatever's under the cursor - the plugin by Sandip Chitale on which I believe it's based (and I used in 5.5.1) had a button on the toolbar to turn this feature on and off. I have a vim-ish colour theme (grey text on black) so a bright yellow highlight everywhere that can only be turned on or off by navigating the preferences dialogue is annoying - and I can't change the colour it uses! So I leave it switched off, and use the old fashioned (tedious, slow) right-click,find-usages which this feature was meant to replace.
  • if you have a large contiguous region of "changed" code, for example you just added a hundred lines or so, you can navigate to that section really easily by clicking on the green markers in the sidebar... but this will always take you to the very top of the contiguous region, even though in the sidebar it looks like discrete chunks which you should be able to jump to individually.
  • Alt+Shift+F (formerly "Fix imports," a beautiful piece of genius that would add the "import java.util.regex.Pattern"-type lines automagically for you) now performs the glorious "Format source" task. Also known as "rearrange all my code, change whitespace, braces and newlines, and make it generally unreadable." Because, gods know, coders couldn't possibly write neat code. Easy to fix, but annoying that I should have had to.

Things I don't like:

  • the several seconds' delay before most actions. In 5.5.1 I used Ctrl+Space the way I use Tab in a terminal - because it was quicker than typing out the whole word, and as an easy way to correct capitalisation. As a result I could spam out a line of java code while only actually typing maybe 1/4 or 1/3 of the characters and almost never needing to synchronise my shift key. Now it's quicker for me to type out the whole line, including the inevitable backspaces and corrections, than the wait the 2, 3, sometimes 5 or more seconds for the autocomplete menu to appear. Sure, it's neat when it's actually there, but usually I'm already a line and a half ahead by that stage.
  • memory usage. Within two days of using 6.0 I learned how to override the .conf file (create a file called ~/.netbeans/6.0/etc/netbeans.conf) to add the parameter -J-Xmx1024m so that stupid little red "no" icon would stop flashing away in the corner saying "I can't do anything, need more memory!"
  • the way as soon as you put close-parentheses on a method call, javadoc (Ctrl+Shift+Space), go-to-implementation (Ctrl+Click), and sometimes even auto-complete (Ctrl+Space) NO LONGER WORK, because it can't recognise the method signature - and the only way to get out of this situation is by throwing semicolons around or deleting code until it works again.
  • refactoring something in a common library, which then causes the whole IDE to lock up while it "compiles" every open project
  • using "Project" dependencies if, like we do at work, you have lots of projects in a mostly linear hierarchy, where the root libs are common to everything, and each subsequent tier is common to almost everything above it. So compiling a top-level project involves checking the root libs, then rechecking them for tier 1, then rechecking them and tier 1 for tier 2, then rechecking the root, tier 1, and tier 2 for tier 3, ... etc. If, like we recently did at work, you add another level of projects above the top, you actually double the time it takes to go through the dependency check before compiling the project - on every build! Relatedly, would it be so hard to have a "Clean just this project, not all the dependencies" menu item?

Yes, some of those things have been there since 5.5 and earlier. But they could have been fixed. Stuff that didn't need to be fixed was. I'll keep using it, though; I've gotten used to the quirks, the way I got used to 5.5.1's. There are other little things I like, too, which my coworkers (the ones still in 5.5.1) are jealous of. I can't think of any at the moment. They're all little things, which just go to show that NetBeans is actually written by people who use it.

... Matty /<

]]>
Fri, 07 Dec 2007 00:00:00 +1100 043250fbb34da382a5479cec38cbf36f