Categories: Front End Development, Haxe, Haxe Log, Small Universe

Writing a framework: web application architectures I’m inspired by

  1. I’m going to write a web app framework (again) (maybe)
  2. Writing a framework: web application architectures I’m inspired by

A look at recurring architectural patterns I see in both the front end and the back end, which have the potential to tie together into a full-stack framework.

In my last post I said part of my reason for wanting to write another web framework was that I’ve been exposed to similar ideas in both the front end and back end, and wanted to experiment with an architecture that ties it all together.

In this post, I’m going to explore those: unidirectional data flow, the Elm architecture, CQRS, and event sourcing. And then I’ll look at the common themes I see tying them together.

State, views, and one-directional data flow

Almost every popular front end architecture I’ve encountered recently shares an idea: you have data representing the current state of your page, and you use that state to render the view that the user can see and interact with.

If you want to change something on the page, you don’t update the page directly, you update the state, and that causes the view to update.

You might have heard this described in a few ways:

  • Data Down, Actions Up: the data flows down from the state to the view. And the actions come up from the view to modify the state.
  • Model, View, Controller: You have a data “model” layer that holds information about the current state, a “view” layer that knows how to render it, and a “controller” that does the communication in the middle.
  • Unidirectional data flow: You often hear this term in the React ecosystem. Data flows from the top of your view down into the nested components. So a component only knows about the data passed into it, and nothing else. The data always flows downwards. How do you change the data then? As well as passing down the data from the state, we also pass down a function that can be used to update the state.
An illustration showing a circular flow of data and actions.  There is an arrow from “State” to “View” labelled “Data down” and from “View” back to “State” labelled “Actions Up”.
Actions Up, Data Down. Every application has “State” (data for the application) and “View” (what the user sees). The view is always rendered and updated from the current state. (We call this “Data down”). And the view can trigger actions to update the state. (We call this “Actions up”).

This pattern is used in frameworks as diverse as React and Ember and Elm. Why is it so common? Here are some of the advantages it provides:

  • Each function in your code has one job: turn the state into a view, or update the state in response to an action. This makes it nice and easy to wrap your head around an individual piece of code.
  • The functions don’t need to know about each other. If you have a “to-do list” app, and an action that adds a new to-do item – you don’t need to know the 3 different places it’ll show up in the UI and change them all – you just update the state. Likewise, if you want to add a new view, you don’t have to touch all the places that edit the “to-do” list, you can just look at the current state, however it came to be, and use that data to render your page.
  • It becomes easy to debug. If there’s a bug, you can check if the state data is correct. If the state data looks correct, then the bug will be in one of your view functions. If that state data looks wrong, then the bug is probably in one of your action functions that change the state.
  • It makes it easier to write tests. Your action functions are simple to test: if our state looks like this, and we perform this action, then the state should now look like that. That’s easy to write a unit test for. And for your view functions, you can write tests that use mocked state data to test all the different ways your view might be rendered. You can even create “stories” with Storybook, or take visual snapshots, for quick visual tests of many different ways the UI looks.
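
Here’s that whole loop as a minimal sketch in Haxe (all names are hypothetical, and a real app would render DOM rather than strings – this just shows the shape of the pattern):

typedef State = { todos:Array<String> };

enum Action {
  AddTodo(text:String);
}

class DataDownActionsUp {
  static var appState:State = { todos: ["Plan week"] };

  // "Data down": the view is a pure function of the current state.
  // The dispatch callback is passed down with the data; a real view
  // would wire it to event handlers like onClick.
  static function view(state:State, dispatch:Action->Void):String {
    return "<ul>" + [for (t in state.todos) '<li>$t</li>'].join("") + "</ul>";
  }

  // "Actions up": the only way to change the page is to produce a new state.
  static function update(state:State, action:Action):State {
    return switch action {
      case AddTodo(text): { todos: state.todos.concat([text]) };
    }
  }

  static function dispatch(action:Action):Void {
    appState = update(appState, action);
    trace(view(appState, dispatch)); // re-render from the new state
  }

  static function main() {
    trace(view(appState, dispatch));
    dispatch(AddTodo("Check Slack"));
  }
}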

The Elm architecture

Elm takes this concept to the extreme. By making a language and a framework that are tightly integrated, they can force you to follow the good advice from “data down, actions up”.

The Elm architecture has four parts:

  • Model – the state of your application
  • View – a way to turn your state into HTML
  • Messages – a message triggered by the UI (like clicking a button) that has information about an action the user is requesting (like attempting to mark an item as complete)
  • Update – a way to update your state based on “messages”
An illustration titled "The Elm Architecture" showing a circular flow diagram between State, View, Html and Update. There is some pseudocode in each of "State", "View", "Html" and "Update", which is described in the caption.

The Elm Architecture.

State. The page has some current state that is used to render the view. We use the type system to make sure the structure of this is consistent. In this example, it might have a title “My work” and a list of tasks like “Plan week” and “Check Slack”.

View. Render HTML based on the current state. Following on the example above, we might have a view function that receives our state, and calls functions to render a <h1> element and a <ul> with the list of tasks, and a form for adding new to-do items.

Html. We never manually edit HTML or the DOM, we just update state, which updates the view, and the framework checks what HTML needs changing. In the example above, this would be the rendered h1, ul and form HTML / DOM produced by the framework.

Update. The only type of events we trigger from the HTML view are strictly typed and exactly what our update function is expecting. When an action happens, the framework uses the update function to work out what the new state should be, based on the previous state and the action that occurred. Following on the example, an “AddTodo” action might append a new item to the list, while the “CompleteTask” action might remove an item from the list. The only way to update the state is to use the update function, one action at a time. This makes state management easy to unit test and debug!

There is also the opportunity to interact with the outside world – things like a Backend API. Elm provides a way to do this from the Update function (which can in turn trigger new actions) but it doesn’t have strong opinions on what happens in the Backend API.

And Elm will enforce this. You can only update the view of the page via your “view” function. Your “view” function only has access to the model to decide what to render. The only actions your view can trigger are the list of messages you define. And you can only update the state in the model using your update function to change things as messages come through.

And Elm has a fantastic type system and compiler to make this all work really nicely together. To show how it works, imagine you have a “to-do list” and a button to mark an item as complete (there’s a code sketch of this after the list below):

  • In your “view” function where you render the button, you can set an “onClick” event.
  • The “onClick” event will trigger a message that something has happened. You decide the names of the possible messages that can happen, so we might choose a name like “MarkToDoAsComplete”.
  • Because we have told Elm we have a message called “MarkToDoAsComplete”, it will force us to handle this message in our “update” function.
  • In our “update” function we write the code that updates the model, setting the current to-do as complete.
  • When a user views the page they see the button. When they click the button, the message is triggered, the update function is run, the model is updated with the to-do item now marked as complete, and the view updates in response. By the time we’ve done all the things the compiler asks, it all just works.
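
The Elm code for this is compact; here’s the same shape sketched in Haxe instead (illustrative, not real framework code). As in Elm, the switch over the message enum must be exhaustive, so forgetting to handle “MarkToDoAsComplete” is a compile-time error:

typedef Todo = { text:String, complete:Bool };
typedef Model = { todos:Array<Todo> };

enum Msg {
  MarkToDoAsComplete(index:Int);
}

class TeaSketch {
  // If we add a new Msg constructor and forget a case here, the compiler
  // rejects the program – this is the "chase the compiler" workflow.
  static function update(msg:Msg, model:Model):Model {
    return switch msg {
      case MarkToDoAsComplete(i):
        { todos: [
          for (j in 0...model.todos.length)
            if (j == i)
              ({ text: model.todos[j].text, complete: true })
            else
              model.todos[j]
        ] };
    }
  }

  static function main() {
    var model:Model = { todos: [{ text: "Plan week", complete: false }] };
    model = update(MarkToDoAsComplete(0), model);
    trace(model.todos[0].complete); // true
  }
}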

The great thing about this is that Elm knows exactly what code is needed for all the pieces of your application to work. If you’re missing anything, it will give you a nice error message showing what to fix. This means that you essentially never get runtime errors in your Elm code.

But even nicer than that, it means you have a great workflow:

  • You add your button, and a “MarkToDoAsComplete” message
  • The compiler tells you that message needs to be added to your list of app messages. You do that.
  • The compiler tells you that your “update” function needs to handle the message. You do that.
  • It now all works.

This “chase the compiler” workflow is what originally got me excited about the Elm language, not just the architecture – you can see Kevin Yank’s talk “Developer Happiness on the Front End with Elm” for a more detailed overview.

(As a bonus, if you do spot anything wrong, the strict framework for updating state based on actions, one at a time, allows powerful debugging tools like “time travel debugging”, where you can replay events one at a time to see their effect.)

Command Query Responsibility Segregation (CQRS)

On the back end, we sometimes find a similar pattern to “data down, actions up”. It’s called “Command Query Responsibility Segregation”. You separate the queries (data) and the commands (actions) into separate code paths, separate API endpoints, or even separate services.

If your back end uses an SQL database, you can think of the “queries” as using SELECT statements, and the “commands” as using INSERT or UPDATE statements.

An illustration titled "Command / Query Responsibility Separation CQRS" showing a circular flow diagram.

A Database is at the top, with an arrow labelled "SELECT FROM..." pointing to a section named "Query (Data Down)", which has an arrow to "Client UI", which has an arrow to "Command (Action Up)", which has an arrow labelled "INSERT INTO" pointing back to the database.
Command Query Responsibility Segregation (CQRS) encourages splitting up the “queries” (ways of reading the state) from the “commands” (ways of modifying the state). This ends up with many of the same benefits as “data down, actions up”, but for back end endpoints or services.

We use separate code paths for Commands (writes) and Queries (reads). Rather than having an API return new data after a command, we have the client repeat the full query.
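
As a sketch of what that separation can look like in code (hypothetical names, not a real API), the read side and the write side live behind entirely separate interfaces:

typedef Todo = { id:Int, task:String, complete:Bool };

// Query side ("data down"): read-only, safe to cache or replicate.
interface TodoQueries {
  function listTodos():Array<Todo>;
  function openTaskCount():Int;
}

// Command side ("actions up"): changes state and returns nothing to render;
// after a command, the client re-runs a query to refresh its view.
interface TodoCommands {
  function createTask(task:String):Void;
  function completeTask(id:Int):Void;
}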

And you end up with similar advantages:

  • Each endpoint has one job.
  • The endpoints don’t need to know about each other.
  • It becomes easier to debug.
  • The command endpoints and the query endpoints can adopt different scaling strategies. For example, caching can be applied to the “query” endpoints.

Event Sourcing

When we talked about the Elm architecture, we saw a front end framework with a strict way to update the current state: by processing one message at a time. Event sourcing brings a similar concept to our back end, and crucially, to our data and our “source of truth”.

It’s normal for the “source of truth” in a web application to be a database that represents the current state of all of your data. For a todo list, you might have a row for each todo item, and columns to set the text of the item, whether it is complete or not, and the order it appears in the list.

That table would be your source of truth.

A diagram titled "event sourcing" with two parts. One part is labelled "Source of truth: current state" and shows a traditional database table view with a row for each to-do task, and columns for "id", "task" and "complete".

The other part is labelled "Source of truth: events". And shows a sequence of squares representing events: "Create Task", "Create Task", "Complete Task", "Create Task". Each event has associated data like an id or task text.

Event Sourcing – changing the source of truth. Traditionally in a web application the “source of truth” might be a database table that represents the current state of the application. In event sourcing, the source of truth is an event log: all of the actions that occurred, one at a time. We can use this event log to build up a view of the current state.

Event sourcing is about changing the “source of truth” to be the events that occurred. Rather than caring exactly which todo items currently exist, and if they are complete, we care about when a user created a task, or marked a task as complete, or changed the order of the tasks in a list. These are the “events”, and they are our “source” of truth.

And we can process them, one event at a time, to build up a view of the data (in event sourcing these are often called “projections” of the data). Some projections might look very similar to what we had before – a database table with a row for each todo item, a column for the item text, whether it is complete, and the order in the list.

The power of event sourcing is that we can also create other views of the data. Perhaps we want to create a trend line graph showing how many open tasks we’ve had over time. If our source of truth was the current state, we wouldn’t be able to tell how many tasks you had open last week (or this week last year!). With event sourcing, we can go back over all of the events and build new views of the data.

A diagram showing the flow of data in an event sourced application, from "Client Actions" to "Command Handler" to "Event Log" to "Projections" (we show two examples) to a view of the data (again with two examples).
Event Sourcing allows us to create different “projections” for different ways of viewing the data.

Client actions. We can start with actions that are triggered by the user. Event sourcing as a pattern has no strong opinions on how this is handled.

Command handler. A service takes the action coming in from the client (like ticking off a task) and decides if it is allowed. If it is, it adds it to the event log.

Event log. A log of all the events that have occurred. Often this is in a database table. In the diagram above, we show 4 events: “Create Task”, with id “1” and task “Draw Diagram”. Then “Create Task” with id “2” and task “Write post”. Then “Complete Task” with id “1”. (Meaning “Draw Diagram” is complete). Then “Create Task” with id “3” and task “Clean dishes”.

Here the diagram splits into two, because rather than just having the task list, we can process the events and generate other interesting data. These “projections” are different data derived from the same events.

We have a “Your Tasks projection” that is a database table with “id”, “task” and “complete” columns. Based on the events, Task 1 “Draw diagram” is complete, while 2 “Write post” and 3 “Clean dishes” are not. This could be used to render a traditional to-do list UI.

And we have an “Open Tasks Graph projection” with two columns, “date” and “open tasks”. For any given date, it shows how many tasks were open. We could use this to draw a trend line graph.
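
To make the projection idea concrete, here’s a Haxe sketch that replays that same four-event log into both projections (the shapes are illustrative – a real event store and projection pipeline has more machinery):

enum Event {
  CreateTask(id:Int, task:String);
  CompleteTask(id:Int);
}

class ProjectionSketch {
  static function main() {
    // The event log is the source of truth.
    var log = [
      CreateTask(1, "Draw diagram"),
      CreateTask(2, "Write post"),
      CompleteTask(1),
      CreateTask(3, "Clean dishes")
    ];

    // "Your Tasks" projection: the current state of each task.
    var tasks = new Map<Int, { task:String, complete:Bool }>();
    // "Open Tasks" projection: the open-task count after each event
    // (a stand-in for the date-based trend line).
    var openCounts = [];
    var open = 0;

    for (event in log) {
      switch event {
        case CreateTask(id, task):
          tasks.set(id, { task: task, complete: false });
          open++;
        case CompleteTask(id):
          tasks.get(id).complete = true;
          open--;
      }
      openCounts.push(open);
    }

    trace(tasks);      // task 1 complete; tasks 2 and 3 still open
    trace(openCounts); // [1, 2, 1, 2]
  }
}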

Similar to the front end with actions and state, this also opens up some powerful debugging options. We can replay the events and “time travel” to see exactly how our system responded, and look out for points where things may have gone astray.

Bringing it together: the common concepts I want my “Small Universe” framework to draw on

You probably spotted some similar themes running through the above architectures:

  • Keeping code to fetch data and code to process actions separate
  • Having a way to get the “current state” for a page from our data
  • Always rendering the pages based on that current state
  • Allowing the views to trigger actions or “events”
  • These events being tracked, and considered our source of truth
  • Responding to one event at a time to update our application data

Following these allows us to write code which is simpler – each function is focused on either fetching data for the current view, displaying the current view, or processing an action. That makes code easier to understand, easier to test, and simpler to debug.

Making sure our data is updated one action at a time also opens up potential for time travel debugging, which is an incredibly powerful feature for you as a developer when you’re investigating how something went wrong. It also leaves the door open for new features that you can build, being able to take full advantage of all the past user actions.

Finally, by strictly defining the shape of your state, and the set of actions (or “events” or “messages”) that are possible, you can have the framework and compiler do a lot of work for you, ensuring that if a button exists, it has an action, and the action updates the state, and the view reflects the updated state.

So you can find a very productive workflow where you start adding one new line of code for your feature, and the compiler will guide you all the way to completing the feature as valid code, and you can be relatively confident it’ll work.

So, for this “Small Universe” framework I’m starting, I am taking inspiration from these architectures to try to build something that leads you to write code that is easy to write, understand, test and debug. Something that uses events as a source of truth to make it easy to build new features that process previous actions into features or views we hadn’t imagined up front. And something that leads to a happy and productive workflow, with the compiler able to provide ample assistance as you build out new features.

I’ll come back to describe the architecture I’m aiming for in a future post. But I hope this helps you understand the direction from which I’m approaching this project.

Have any questions? Things that weren’t clear? Ideas you want to share? I’d love to hear from you in the comments!

Categories: Haxe, Small Universe

I’m going to write a web app framework (again) (maybe)

  1. I’m going to write a web app framework (again) (maybe)
  2. Writing a framework: web application architectures I’m inspired by
A photo from a children’s book (the text is quoted below).

This is my idea, I thought. No one knows it like I do. And it’s okay if it’s different, and weird, and maybe a little crazy. I decided to protect it, to care for it. I fed it good food. I worked with it, I played with it. But most of all, I gave it my attention.

“What do you do with an idea” by Kobi Yamada and Mae Besom.

I’m thinking about writing a web framework. This wouldn’t be my first time.

Why?

  • Primarily, because I’ve got an idea, and want to explore it. The quote and photo above from “What Do You Do With An Idea” reminded me that creativity and inspiration are muscles we can train – the more we explore our ideas with curiosity, the more the ideas will keep coming.
  • I want to play with code more, try out things for the sake of curiosity and experimentation and be okay with it not necessarily building toward something as part of my day-job.
  • I’m a software engineer that enjoys building the thing to build the thing. At Culture Amp, I’m on the “Foundations” team that builds things like the design system and our common tooling, rather than working on the main products.
  • Every now and then I want to build a little side project, but get paralyzed – what should I build it with? None of the existing web frameworks I look at appeal to me – I want a single framework for front end and back end, I want a language with a good compiler, and I want something I can grasp from front to back. If there is magic, I want it to be magic I understand.
  • I’ve learned a lot since I last tried this. In particular, at Culture Amp I’ve been exposed to languages like Elm on the front end and concepts like Event Sourcing / CQRS on the back end. And they’re similar and interconnected and I want to see what it looks like to build a framework that builds on patterns from all of these.
  • I’ve enjoyed this kind of project in the past!

(Side note: I’m aware that being in a position where I have the time and money to indulge in this is a sign of incredible privilege… I’m still learning what it means to actively tear down the unfair systems that contribute to that privilege. There are more significant things I can invest my time in for certain. But I’ll save that for another blog post 😅)

What have I tried in the past?

  • I once was the main developer (including doing a ground-up rewrite) of a framework called Ufront. It loosely copied MVC.net on the backend, but could compile to several backend languages (thanks to Haxe). The killer feature was that you could re-use code – routing, controllers, views, models, validators – on the front end, and have seamless compiler help when calling backend APIs from the front end.

    To this day I haven’t seen another framework attempt that level of back-end / front-end integration, with the possible exception of Meteor. (Admittedly, I haven’t been looking for a while.) I feel this tried to be too many things – a generally useful backend, as well as a front-end integration layer, as well as an ORM, an auth system, and more. At the end of the day, with the majority of the development coming from me, it didn’t have momentum for such an ambitious scope. I did build two useful education apps with it though!
  • I also attempted a more tightly scoped project that never got off the ground, called “Small Universe”. I started this around the time I was interviewing for Culture Amp, and it was heavily influenced by Kevin Yank’s talk “Developer Happiness on the Front End With Elm”. (I now work with Kevin at Culture Amp.) The idea was to have a clear data flow for a small “Universal Web App”. The page can trigger actions. Actions get processed on the back end. The back end can generate props for a page. Then the props are used to render a view. Basically, the Elm architecture but with an integrated back end / API layer.

    I liked this a lot, and built out quite a prototype, but haven’t touched it in over a year. I like this scope of “small, opinionated framework to give structure to a universal web app”.

    The first prototype I built integrated with React on the client, which I think I’d skip this time. I also think I’d go further with the data flow and push for event-sourcing (I didn’t know the terminology at the time, but I’d implemented CQRS without Event Sourcing).

    I also liked the name, and think I’ll reuse it! “Small Universe” spoke to it being a framework for “universal web apps”, where code is shared seamlessly between client and server. And also it being “small” – giving you all the building blocks for an entire app in a tight, coherent framework that is easy to build a mental model for.

So am I going to do this?

I think so! I’m interested in having a project, and “working out loud” with blog posts alongside PRs to explore the problems I’m trying to solve and the approaches I’m experimenting with.

I don’t think it’ll necessarily become anything – and there’s a good chance I’ll not follow through because life gets busy (my wife and I are expecting a second child next month 😅) but I’m interested in sharing my initial thoughts and seeing where it goes from there.

What am I optimizing for?

Scribbles from my iPad as I explored what I’m optimising for. (The section below is derived from this).
  • My own learning, curiosity and practice. (See “Why” above)
  • An Elm-like workflow, but full stack. Elm has this great workflow where you can create a new button in your view. The compiler will ask you to define an action for the button. Once you define an action (like `onClick save`) the compiler will ask you to write an “update” handler for when that action occurs. When you do that, you’ll write the code to handle the save. You start with the UI, and then the compiler guides you through every step needed to see the feature working. By the time the compiler has run out of things to say, your front end probably works. I want the compiler to provide that experience, with guidance, hints and safety to build features across the front and back end stacks.
  • Clear flow of data and logic. Every piece of logic in the app should have its place in the architecture, and it should be easy to find where something belongs. On the back end this looks like CQRS (Command Query Responsibility Segregation) – having the code paths that fetch data for a page (Queries) be completely separate from the code paths that change data (Commands). On the front end this looks like separating out state management from the views – the page state should be passed into a view function. There’s lots to dive into here, but I’ll save it for a future post.
  • Start small but keep options open. I want this for myself and side projects. I want something that can start small, and where the entire mental model can fit in my head. But which makes it easy to migrate to a more traditional framework, or a more advanced architecture, if the thing ever grows.
  • Keeping open the possibility for a host of technical nice-to-haves:
    • Event sourcing.
    • Offline client-side usage.
    • Multiple projections.
    • Server side rendering.
    • GDPR “right to be forgotten”.
    • Using Web Sockets for speedy interface updates.
    • Ability to have “branch previews” so you can see what a PR will look like.
  • And to call out some trade-offs:
    • I’m not aiming for compatibility with React or other view layers. I think the idea I’m chasing handles data flow differently enough that it’s not worth trying to shoe-horn existing components.
    • I’m not aiming for micro services. For side projects I think a “marvelous monolith” is more sane. If the data is event sourced a future transition to micro services will be easier.
    • Not aiming to support multiple back end targets, or front ends. I’ll probably pick just one back end stack to focus on, and focus on the web (not native / mobile).
    • Not aiming for optimum performance. If I can write an API signature in a way that can be optimized and parallelized in future, I will, but I’ll probably do some naive implementations up front (such as updating all projections synchronously in the main thread).
    • Not aiming to be a general HTTP framework that can handle arbitrary request and response types. There are plenty of good tools for that job.

Let’s begin

So to start off, here’s where I’ll do my work:

Categories: Haxe

Lix: A step forward in dependency management for Haxe projects

Part 1: Haxelib gets painful

I’ve been using Haxe for a long while, and for about 2-3 years I was using Haxe full time, building web applications. So I know how important managing your dependencies is, and I know how painful it was with Haxelib – especially if you had a lot of dependencies, a lot of projects, or needed to collaborate with people on different computers.

Haxelib is okay when you’re just installing one or two libraries, and they’re libraries with stable releases where you don’t change versions often, and if you don’t need to come back to your code after long gaps in time. Basically, haxelib is fine if you’re doing weekend hackathons or contests like Ludum Dare, where your projects probably aren’t too complex, you’re not collaborating with too many other people, you’re using existing frameworks, and you don’t have to worry about whether it will still work fine in 4 months’ time. Otherwise, it can be quite painful.

I tried to help with Haxelib at one point in time (I am still in the top 4 contributors on Github, though most of that was back in 2013), but it proved pretty unruly – even skilled developers were afraid of changing too much or refactoring in a way that might break things for thousands of developers. And some changes were impossible to make without first changing the Haxe compiler. So it’s largely sat in the “too hard” basket and has not had many meaningful improvements since it first became its own project in 2013.

(No offense to anyone who has been working on it – you are a braver soul than I! But I think we all agree it’s not as good as it needs to be.)

Since mid 2016, I have been working in other jobs where I don’t use Haxe full time, instead spending more time with JS: using tools like NPM, Yarn, Webpack. And they’re certainly not perfect when it comes to dependency management, but there are a few things that they do right (Yarn especially).

Part 2: What the JS ecosystem gets right.

In Node JS land (and eventually normal JS land), there was a package manager called NPM – Node Package Manager. It had a registry of packages you could install. It would also let you install a package from Github or somewhere else. The basic things.

Here’s what I think it did right:

  • Used a standard format (package.json) to describe which packages a project uses.
  • Put all of the libraries in a standard location (node_modules/${my_cool_lib}/)
  • NodeJS didn’t care if you used NPM or not. As long as your stuff was in node_modules, it would be happy.

Why was this a good move? Because it allowed some talented people to build a competitor to NPM, called Yarn. By having simple expectations, you can have two competing package managers, and innovation can happen. Woo!

Yarn is what I use at work on a big project with 119 dependencies (and about 1000 sub-dependencies). Here’s what yarn did right:

  • Reproducible builds. While package.json has information about which version I want (say, React 16.* or above), Yarn keeps information in a file called yarn.lock which says exactly which version I ended up using (say, React 16.0.1). This way, when my friend joins the project and tries to install things, she won’t accidentally end up on a newer or older version than me – Yarn makes sure we’re all using exactly the same version, and all of our dependencies and sub-dependencies are also exactly the same.
  • A global cache. When Yarn came out, it was several times faster than NPM on our project because it kept a cache of dependencies and was able to resolve them quickly when switching between projects and branches. NPM has caught up now – but that’s the benefit of competition!

Part 3: Introducing lix (and its friends: switchx and haxeshim)

In 2015 I remember chatting to my friend Juraj Kirchheim (also one of haxelib’s key contributors, who just kind of gave up on it) about what an alternative might be, and he described something that sounded great: a futuristic, utopian alternative to haxelib.

2 years later, and it turns out, it’s been built! And it’s called “lix”.

(What’s with the name? I’m guessing it is short for “LIbraries in haXe”, a leftover from when every Haxe project needed an X in it for cool-ness, and Haxe was spelt as haXe. That, and the lix.pm domain name must have been available).

Lix also depends on two other projects: haxeshim and switchx. The names aren’t super obvious, so here is my understanding of how it all works:

  • Haxe Shim intercepts calls to Haxe and does some magic. The Haxe compiler on its own explicitly calls haxelib, so you literally can’t replace haxelib without intercepting all calls to the compiler and getting rid of -lib arguments. So haxeshim is a shim that intercepts Haxe calls and sorts out -lib arguments so that haxelib is never needed.

    As a bonus, it also supports switching to the right version of Haxe for the current project. But for that, we also need “switchx”.

  • SwitchX lets you pick the Haxe version you need for your project, and automatically switches Haxe versions for whatever project you’re in. If you change between project A, on Haxe 3.4.3, and project B, in a different folder and running Haxe 4, it will always use the correct one.

    How?

    When you start a project you run switchx scope create. This makes a .haxerc file which says that this folder is a specific project, or “scope”, and should use the Haxe version defined in the .haxerc file.

    How do you change the version?

    You run switchx use latest or switchx use stable or switchx use nightly or switchx use 3.4.3 etc. It lets you instantly switch between different versions, and ensures the correct version is always used while you’re in your project folder.

    Nice!

  • Lix is a package manager that you use to install packages. It is made to work with Haxe Shim, and creates a “haxe_libraries” folder, with a new hxml file for each dependency you install. It’s super fast because it uses a global cache (like yarn) and it makes sure you always have the correct version installed (like yarn). It supports installing dependencies from Haxelib, Github, Gitlab or HTTP (zip file). Anytime you update or change a dependency, one of the haxe_libraries/*.hxml files will be updated, you commit this change to Git, and it will update for all of your coworkers as well. Magic.
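
To give a feel for what ends up on disk: each file in “haxe_libraries” is an ordinary hxml file that pins one dependency. The real files lix generates also record an install instruction so the exact version can be re-downloaded into the global cache, but the essential part is roughly this (illustrative, with hypothetical paths – not verbatim lix output):

# haxe_libraries/tink_web.hxml
-cp /home/user/haxe/cache/tink_web/0.1.0/src
-D tink_web=0.1.0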

These tools are (for now) built on top of NodeJS, so you can install them with NPM or Yarn.

If you want to install each of these, you basically run these commands (warning: these will replace your current Haxe installation):

# Install all 3 tools and make their commands available.
yarn global add haxeshim switchx lix.pm

# Create a ".haxerc" in the current directory, informing haxeshim that
# this project should use a specific version of Haxe and specific
# `haxe_libraries` dependencies
switchx scope create

# Use the latest stable version of Haxe in this project.
switchx install stable

Part 4: What lix can do that haxelib cannot do (well).

With this setup, here’s what I can do that I couldn’t do before:

Be certain that I always have the exact right version installed, even if the project is being set up on someone else’s machine. Even if I pulled from a custom branch, using something like lix install github:haxetink/tink_web#pure (install the latest version of tink_web from the “pure” branch), when I run this on a different machine, it will use not only the same branch, but the exact same commit that it used on my machine, so we will be compiling the exact same code.

Easily get up and running on a machine that doesn’t even have Haxe installed. I tried this today – took a project on Linux, and set up its dependencies in Lix. It used a combination of Haxelib, Github, Gitlab, and custom branches. It was a nightmare with Haxelib. I also added haxeshim, switchx and lix.pm as “devDependencies” so they would be installed locally when I ran yarn install. I opened a Windows machine that had Git installed, but not Haxe, cloned the repo, and ran yarn install. It installed all of the yarn dependencies, including haxeshim, switchx, and lix, and then running lix download installed all of the correct “haxe_libraries”, and then everything compiled. Amazing!

Know if I’ve changed a dependency. Today I was working on a change for haxe-react. In the past I would have used haxelib dev react /my/path/to/react-fork/. Now I edit haxe_libraries/react.hxml and change the class path to point to the folder my fork lives in. The great thing about doing this is that Git notices I’ve changed it. So when I go to commit the work on my project, git lets me know I’ve got a change to “react.hxml” – I’ve changed that dependency. In this case, I knew what to do: push my fork to Github, and then run lix install gh:jasononeil/haxe-react#react16 to get Lix to properly register my fork in a way that will work with my project going forward. I then commit the change, and people who use my project will get the up-to-date fork.
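
Concretely, that edit is a one-line classpath change in haxe_libraries/react.hxml – something like this (paths hypothetical):

# before: points at the copy lix downloaded into its cache
-cp /home/user/haxe/cache/react/1.4.1/src
# after: points at my local fork
-cp /home/jason/workspace/haxe-react/src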

Start a competing package manager. The great thing about all of this is that “lix” has some great features, but if I want to write better ones, I can. Because “haxeshim” just expects each dependency to have a “haxe_libraries/*.hxml” file, I could write my own package manager that does things in my own way, and just places the right hxml file in the right place, and I’m good to go. This makes it possible to have multiple, competing package managers. Or even multiple, co-operating package managers.

Part 5: Vote on the future

So, I think Lix has learnt from a lot of what has gone “right” in the NodeJS ecosystem, and built a great tool for the Haxe ecosystem. I love it, and will definitely be using it in my Haxe projects going forward.

The question is, do we really need “haxeshim” and “switchx” and other such tools just in order to have a competing package manager? For now, sadly, because of the way haxe and haxelib are tied at the hip, you do need a hack like this. But there’s a discussion to change that. (See here and here.)

If you care about Haxe projects having maintainable dependency management, you can help by voting up comments in a discussion that’s happening right now. Here are the comments that I think will help Haxe support something like Lix, and more competing package managers, as first-class citizens going forward. Feel free to upvote with a thumbs up emoji:

https://github.com/HaxeFoundation/haxe-evolution/issues/30#issuecomment-333298948
https://github.com/HaxeFoundation/haxe-evolution/issues/30#issuecomment-333299543
https://github.com/HaxeFoundation/haxe-evolution/issues/30#issuecomment-333302625
https://github.com/HaxeFoundation/haxe-evolution/issues/30#issuecomment-333311522
https://github.com/HaxeFoundation/haxe-evolution/issues/30#issuecomment-333323976

And two of my own comments:

https://github.com/HaxeFoundation/haxe-evolution/issues/30#issuecomment-333367950
https://github.com/HaxeFoundation/haxe-evolution/issues/30#issuecomment-333368097

Feel free to have a look and contribute to the discussion. For now though – if you don’t mind installing haxeshim and switchx, there is a very good solution for managing your haxelibs and dependencies in a reliable, consistent, but still flexible way. And it’s called Lix.

Categories: Haxe

Juggling haxelibs between multiple projects

Once you have more than one project you’re building in Haxe, you tend to run into situations where you use different versions of dependencies.  Often you can get away with using the latest version on every project, but sometimes there are compatibility breaks, and you need different projects to use different versions.

There is a work-in-progress issue for Haxelib to get support for per-project repositories.  Until that is finished, here is what I do:

cd ~/workspace/project1/

mkdir haxelibs

haxelib setup haxelibs

haxelib install all

And then when I switch between projects:

cd ~/workspace/project2/

haxelib setup haxelibs

What this does:

  • Switch to your current project
  • Create a folder to store all of your haxelibs for this project in
  • Set haxelib to use that folder (and when I switch to a different project, I’ll use a different local folder).
  • Install all the dependencies for this project.

Doing this means that each project can have its own dependencies, and upgrading a library in one project doesn’t break the compile on another project.

Hopefully that helps someone else, and hopefully the built in support comes soon!

Categories: Haxe

Requiring a Class argument to have a certain constructor signature

Haxe has a very powerful reflection API, allowing you to create objects of a class which you don’t know until runtime.

var u:User = Type.createInstance(User, ["jason", "mypass"]);
u.save();

Sometimes, you have a method where you want to create an object, but which class gets instantiated is specified by the caller:

function createUser( cls:Class<User>, u:String, p:String ) {
  var user:User = Type.createInstance( cls, [u, p] );
  user.save();
}

createUser( StaffMember, "jack", "mypass" );
createUser( Subscriber, "aaron", "mypass" );

So far so good.  But what if we have a 3rd user class, “Moderator”, that has a constructor requiring 3 arguments, not just the username and password?

createUser( Moderator, "bernadette", "mypass" );

This compiles okay, but will fail at runtime – it tries to call the constructor for Moderator with 2 arguments, but 3 are required.

My first thought was, can we use an interface and specify the constructor:

interface IUser {
  public function new( user:String, pass:String ):Void;
}

Sadly in Haxe, an interface cannot define the constructor.  I’m pretty sure the reason for this is to avoid you creating an object with no idea which implementation you are using.  Now that would work for reflection, but wouldn’t make sense for normal object oriented programming:

function createUser( cls:Class<IUser>, u:String, p:String ) {
  var u:IUser = new cls(u, p); // What implementation does this use?
}

So it can’t be interfaces… what does work?  Typedefs:

typedef ConstructableUser = {
function new( u:String, p:String ):Void,
function save():Void
}

And then we can use it like so:

function createUser( cls:Class<ConstructableUser>, u:String, p:String ) {
  var user:ConstructableUser = Type.createInstance( cls, [u, p] );
  user.save();
}

createUser( StaffMember, "jack", "mypass" );
createUser( Subscriber, "aaron", "mypass" );
createUser( Moderator, "bernadette", "mypass" ); // ERROR – Moderator should be ConstructableUser

In honesty I was surprised that “Class<SomeTypedef>” worked, but I’m glad it does.  It provides a good mix of compile time safety and runtime reflection.  Go Haxe!

Categories: Haxe

Accept either String or Int, without resorting to Dynamic

A quick code sample:

`AcceptEither`, a way to accept either one type or another, without resorting to “Dynamic”, and still have the compiler type-check everything and make sure you correctly handle every situation.
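
The gist itself isn’t embedded here, but the core of the trick looks something like this (a sketch from memory – the original may differ slightly):

import haxe.ds.Either;

abstract AcceptEither<A, B>(Either<A, B>) {
  inline function new(e:Either<A, B>) this = e;

  // Implicit casts: a value of either type is accepted at the call site.
  @:from static function fromA<A, B>(a:A):AcceptEither<A, B>
    return new AcceptEither(Left(a));
  @:from static function fromB<A, B>(b:B):AcceptEither<A, B>
    return new AcceptEither(Right(b));

  // Expose the underlying enum, so callers must handle both cases.
  public var type(get, never):Either<A, B>;
  inline function get_type() return this;
}

class Main {
  static function printIdOrName(id:AcceptEither<Int, String>) {
    switch id.type {
      case Left(i): trace('Got an Int: $i');
      case Right(s): trace('Got a String: $s');
    }
  }

  static function main() {
    printIdOrName(42);      // works via the implicit cast from Int
    printIdOrName("jason"); // works via the implicit cast from String
  }
}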

Try it on try.haxe.org

The next step would be having Accept2Types, Accept3Types etc., and having a macro automatically build the code for whichever option you ask for.

Categories: Haxe

Super Quick Haxe Abstracts Example

I just posted a quick gist to the Haxe mailing list showing one way that abstracts work.

They’re a great way to wrap a native API object (in this case, js.html.AnchorElement) without having to create a new wrapper object every single time. That means they’re great for performance, the end result code looks clean, and thanks to some of the other abstract magic (implicit casts, operator overloading etc.) there are a lot of cool things you can do.
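
The gist boils down to something like this (a simplified sketch, not the gist verbatim):

// An abstract over js.html.AnchorElement: helper methods and properties,
// but no wrapper object is ever allocated – the abstract compiles away.
abstract Anchor(js.html.AnchorElement) from js.html.AnchorElement to js.html.AnchorElement {
  public var url(get, set):String;
  inline function get_url() return this.href;
  inline function set_url(v:String) return this.href = v;

  public inline function openInNewTab():Void {
    this.target = "_blank";
  }
}

class Example {
  static function main() {
    var el:js.html.AnchorElement = cast js.Browser.document.createElement("a");
    var a:Anchor = el;     // no wrapper allocated here
    a.url = "https://haxe.org";
    a.openInNewTab();
  }
}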

Have a look at the sample, read the Haxe manual, and let me know what you think or if you have questions :)

Categories: Haxe

Web.cacheModule and Expiring DB connections

The Story

A while ago I posted about neko.Web.cacheModule, a way to make your site run dramatically faster on mod_neko or mod_tora. After spending some time making ufront compatible with this mode of running, I was excited to deploy my app to our schools with the increased performance.

I deployed, ran some tests, seemed happy, went home and slept well. And woke up the next morning to a bunch of 501 errors.

The Error

The error message was coming from the MySQL connection: “Failed to send packet”.

At first I just disabled caching, got everyone back online, and then went about trying to isolate the issue. It took a while before I could reproduce it and pinpoint it. I figured it was to do with my DB connection staying open between requests, thanks to the new module caching.

A Google search showed only one Haxe-related result – someone on IRC mentioned that when they sent too many SQL queries it sometimes bombed out with this error message. Perhaps leaving the connection open eventually overran some buffer and caused it to stop working? Turns out this was not the case: I used `ab` (the Apache Benchmark tool) to query a page 100,000 times and still didn’t see the error.

Eventually I realised it was to do with the MySQL server dropping the connection after a period of inactivity. The `wait_timeout` variable was set to 28800 (seconds) by default: so after 8 hours of inactivity. Long enough that I didn’t notice it the night before, but short enough that the timeout occurred overnight while all the staff were at home or asleep… So the MySQL server dropped the connection, and my Haxe code did not know to reconnect it. Whoops.

The Solution

I looked at Nicolas’s hxwiki source code, which runs the current Haxe site, for some inspiration on the proper way to approach this for `neko.Web.cacheModule`. His solution: use http://api.haxe.org/sys/db/Transaction.html#main. By the looks of it, this will wrap your entire request in an SQL transaction, and should an error be thrown, it will roll back the transaction. Beyond that, it will close the DB connection after each request. So we have a unique DB connection for each request, and close it as soon as the request is done.

My source code looks like this:

// Assumed import so Mysql.connect resolves (Mysql and Transaction live in
// sys.db); app-specific classes like Config, Routes, Api, UFAdminController
// and DBAdminController are defined elsewhere in the project.
import sys.db.*;

class Server
{
  static var ufApp:UfrontApplication;

  static function main() {
    #if (neko && !debug) neko.Web.cacheModule(main); #end
    
    // Wrap all my execution inside a transaction
    sys.db.Transaction.main( Mysql.connect(Config.db), function() {
      init();
      ufApp.execute();
    });
  }

  static function init() {
    // If cacheModule is working, this will only run once
    if (ufApp==null) {
      UFAdminController.addModule( "db", "Database", new DBAdminController() );
      ufApp = new UfrontApplication({
        dispatchConfig: Dispatch.make( new Routes() ),
        remotingContext: Api,
        urlRewrite: true,
        logFile: "log/ufront.log"
      });
    }
  }
}
Categories: Haxe, Haxe Log

My Haxe Log: Week #2

This week:

  • On my token one day a week, I continued on my Node-Webkit project. This time I made externs for Kue (externs, which appear to be working) and FFMpeg (externs, not functional just yet). Still enjoying working with Node-Webkit, and with the Node-API library especially. Sad I didn’t get to make more progress on it this week.
  •  Ufront:
      • Make tracing / logging work reliably between multiple requests. After enabling neko.Web.cacheModule(), I began to find areas where Ufront was not very multiple-request-friendly. These would have surfaced later with a port to Client JS or Node JS, but it’s good to find them now. One problem was that our tracing and logging modules were behaving as if there was only one request at a time. This could result in a trace message for one request ending up being output to somebody else’s request, which is obviously bad.

        The problem is a tricky one, as trace() always translates to haxe.Log.trace(), and with Ufront’s multiple-request-at-a-time design, you can’t know which request is the current one from a static method. If I think of a clever way to do it, possibly involving cookies and sessions, then I might include a HttpContext.getCurrentContext() static method. This would probably have to be implemented separately for each supported platform.

        The solution for now, however, was to not keep track of log messages in the TraceModule, but in the HttpContext. Then on the onLogRequest event, the trace modules get access to the log messages for the current context, and can output them to the browser, to a file, or whichever they choose.

        The downside is that you have to use httpContext.ufTrace() rather than trace(). I added a shortcut for this in both ufront.web.Controller and ufront.remoting.RemotingApiClass, so that in your controllers or APIs you can call ufTrace() and it will be associated with the current request. There are also ufLog, ufWarn and ufError.

        I also made RemotingModule work similarly with tracing and logging – so logs go to both the log file and the remoting call to the browser.

      • Fix logging in ErrorModule. One of the things that made debugging the new ufront really hard was that when there was an Error, the ErrorModule displayed, but the trace messages did not get sent to the browser or the log file. I did a bit of code cleanup and got this working now.
      • Fixed File Sessions / EasyAuth. Once able to get my traces and logs working more consistently, I was able to debug and fix the remaining issues in FileSession, so now EasyAuth is working reliably, which is great.
      • Added Login / Logout for UF-Admin. With UF-Admin, I added a login screen and logout, that works with EasyAuth for now. I guess I will make it pluggable later… For now though it means you can set up a simple website, not worry about auth for the front end, but have the backend password protected. If you use EasyAuth for your website / app, the same session will work on the ufadmin page.
      • Created uf-content for app-generated files. I moved all app-generated files (sessions, logs, temp files etc) into a folder called “uf-content”. Then I made this configurable, and relative to httpContext.request.scriptDirectory. You can configure it by changing the contentDirectory option in your UfrontConfiguration. This will make it easier when deploying, we can have instructions to make that single directory writeable but not accessible via the web, and then everything that requires FileSystem access can work reliably from there.
      • Pushed new versions of the libraries. Now that the basics are working, I pushed new versions of the libraries to Haxelib. They are marked as ufront-* with version 1.0.0-beta.1. From here it will be easy to update them individually and move towards a final release.
      • Demo Blog App. To demonstrate the basics of how it works, and a kind of “best practices” for project structure, I created a demo app, and thought I would start with a blog. I started, and the basic setup is there, including the config structure and each of the controller actions, and the “ufadmin” integration. But it’s not working just yet, needs more work.
      • Identified Hair website. I have a website for a friend’s small business that I’ve been procrastinating working on for a long time. On Saturday I finally got started on it, and set up the basic project and routes in Ufront. In about 4 hours I managed to get the project set up, all the controllers / routes working, all the content in place and a basic responsive design with CSS positioning working. All the data is either HTML, Markdown or Database Models (which get inserted into views). Once I’ve got their branding/graphics included, I’ll use ufront to provide a basic way to change data in their database. And then if they’re lucky, I might look at doing some Facebook integration to show their photo galleries on the site.
Categories: Haxe, Haxe Log

My Haxe Log: Week #1

Hello! For a reason I can’t comprehend, this page is the most visited page on my blog. If you’re looking for information about logging in Haxe, the “Logging and Tracing” page in the manual is a good start.

If you can be bothered, leave a comment and let me know what you’re looking for or how you came to be here. I’d love to know!

Jason.

Every week as part of my work and as part of my free time I get to work on Haxe code, and a lot of that is contributing to libraries, code, blog posts etc.  Yesterday was one of those frustrating days where I told someone I’d finish a demo ufront app and show them how it works, but I just ran into problem after problem and didn’t get it done, and was feeling pretty crap about it.

After chatting it out I looked back at my week and realised: I have done a lot.  So I thought I should start keeping a log of what I’ve been working on – mostly for my own sake, so I can be encouraged by the progress I have made, even if I haven’t finished stuff yet.  But also in case anything I’m working on sparks interest or discussion – it’s cool to have people know what I’m up to.

So I’d like to start a weekly log.  It may be one of those things I do once and never again, or it may be something I make regular: but there’s no harm in doing it once.

So here we go, my first log.  In this case, it’s not just this week, some of it requires me to go back further to include things I’ve been working on, so it’s a pretty massive list:

  • Node Webkit: On Mondays I work at Vose Seminary, a tertiary college, and I help them get their online / distance education stuff going.  Editing videos, setting up online Learning Management Systems etc.  I have a bunch of command line utils that make the video editing / exporting / transcoding / uploading process easier, but I want to give these a graphical interface so other staff can use them.  Originally I was thinking of using OpenFL / StablexUI.  I’m far more comfortable with the JS / Browser API than the Flash API however, and so Node-Webkit looked appealing.  On Monday I made my first Haxe-NodeJS project in over a year, using Clement’s new Node-API repo.  It’s beautiful to work with, and within an hour and a half I had written some externs and had my first “hello-world” Node-Webkit app.  I’ll be working on it again this coming Monday.
  • neko.Web.cacheModule: I discovered a way to get a significant speed-up in your web-apps.  I wrote a blog post about it.
  • Ufront: I’ve done a lot of work on Ufront this week.  After my talk at WWX this year, I had some good chats with people and basically decided I was going to undertake a major refactor of ufront.  I’m almost done!  Things I’ve been working on this week (and the last several weeks, since it all ties in together):
    • Extending haxe.web.Dispatch (which itself required a pull request) to be subclassed, allowing you to 1) execute the ‘dispatch’ and ‘executeAction’ steps separately, and 2) return a result, so that you can get the result of your dispatch methods.  This works much more nicely with Ufront’s event based processing, and allows for better unit testing / module integration etc.  The next step is allowing dispatch to have asynchronous handlers (for Browser JS and Node JS).  I began thinking through how to implement this also.
    • After discovering neko.Web.cacheModule, I realised that it had many implications for Ufront.  Basically: You can use static properties for anything that is generic to the whole application, but you cannot use it for anything specific to a request.  This led to several things breaking – but also the opportunity for a much better (and very optimised) design.
    • IHttpSessionState, FileSession: the first thing that was broken was the FileSession module.  The neko version was implemented entirely using static methods, which led to some pretty broken behaviour once caching between requests was introduced.  In the end I re-worked the interface “IHttpSessionState” to be fairly minimal, and was extended by the “IHttpSessionStateSync” and “IHttpSessionStateAsync” interfaces, so that we can begin to cater for Async platforms.  I then wrote a fresh FileSession implementation that uses cookies and flat-files, and should work across both PHP and Neko (and in Future, Java/C#).  The JS target would need a FileSessionAsync implementation.
    • IAuthHandler / EasyAuth: At the conference I talked about how I had an EasyAuth library that implemented a basic User – Group – Permission model.  At the time, this also was implemented with static methods.  Now I have created a generic interface (IAuthHandler) so that if someone comes up with an auth system other than EasyAuth, it can be compatible.  I also reworked EasyAuth to be able to work with a) different IHttpSessionState implementations and b) different IAuthAdapters – basically, an adapter is an interface with a single method, `authenticate()`, which tells you if the user is logged in or not.  EasyAuth by default uses EasyAuthDBAuthAdapter, which compares a username and password against those in the database.  You could also implement something that uses OpenID, or a social media logon, or LDAP, or anything.  All this work trying to make it generic enough that different implementations can co-exist I think will definitely pay off, but for now it helps to have a well thought out API for EasyAuth :)
    • YesBoss: Sometimes you don’t want to worry about authentication.  Ufront has the ability to create a “tasks.n” command line file, which runs tasks through a Command Line Interface, rather than over the web.  When doing this, you kind of want to assume that if someone has access to run arbitrary shell commands, they’re allowed to do what they want with your app.  So now that I have a generic interface for checking authentication, I created the “YesBossAuthHandler” – a simple class that can be used wherever an authentication system is needed, but any permission check it always lets you pass.  You’re the boss, after all.
    • Dependency Injection: A while ago, I was having trouble understanding the need for Dependency Injection.  Ufront has now helped me see the need for it.  In the first app I started making with the “new” ufront, I wanted to write unit tests.  I needed to be able to jump to a piece of code – say, a method on a controller – and test it as if it was in a real request, but using a fake request.  Dependency injection was the answer, and so in that project I started using Minject.  This week, realising I had to stop using statics and singletons in things like sessions and auth handling, I needed a way to get hold of the right objects, and dependency injection was the answer.  I’ve now added it as standard in Ufront.  There is an `appInjector`, which defines things that should be injected everywhere (modules, controllers, APIs etc).  For example, injecting app configuration or a caching module or an analytics API.  Then there is the dispatchInjector, which is used to inject things into controllers, and the remotingInjector, which is used to inject things into APIs during remoting calls.  You can define things you want to make available at your app entry point (or your unit test entry point, or your standalone task runner entry point), and they will be available when you need them.  (As a side note, I now also have some great tools for mocking requests and HttpContexts using Mockatoo).
    • Tracing: Ufront uses trace modules.  By default it comes with two: TraceToBrowser and TraceToFile.  Both are useful, but I hadn’t anticipated some problems with the way they were designed.  In the ufront stack, modules exist at the HttpApplication level, not at the HttpRequest level.  On PHP (or uncached neko), there is little difference.  Once you introduce caching, or move to a platform like NodeJS, this becomes a dangerous assumption.  Your traces could end up displaying on somebody else’s request.  In light of this, I have implemented a way of keeping track of trace messages in the HttpContext.  My idea was to then have the Controller and RemotingApiClass have a trace() method, which would use the HttpContext’s queue.  Sadly, an instance `trace()` method currently does not override the global `haxe.Log.trace()`, so unless we can get that fixed (I’m chatting with Simon about it on IRC), it might be better to use a different name, like `uftrace()`.  For now, I’ve also made a way for TraceToBrowser to try to guess the current HttpContext, but if multiple requests are executing simultaneously this might break.  I’m still not sure what the best solution is here.
    • Error Handling: I tried to improve the error handling in HttpApplication.  It was quite confusing and sometimes resulted in recursive calls through the error stack.  I also tried to improve the visual appearance of the error page.
    • Configuration / Defaults: The UfrontApplication constructor was getting absurd, with something like 6 optional parameters.  I’ve moved instead to having a `UfrontConfiguration` typedef with all of the parameters; you can supply all, some or none of them, and fall-backs will be used if needed.  This also improves the appearance of the code, from:

      new UfrontApplication( true, "log.txt", Dispatch.make(new Routes()) );

      to

      new UfrontApplication({
          urlRewrite: true,
          dispatchConf: Dispatch.make( new Routes() ),
          logFile: "log.txt"
      });

    • More ideas: last night I had trouble getting to sleep.  Too many ideas.  I sent myself 6 emails (yes 6) all containing new ideas for Ufront.  I’ll put them on the Ufront Trello Board soon to keep track of them.  The ideas were about Templating (similar abstractions and interfaces I have here, as well as ways of optimising them using caching / macros), an analytics module, a request caching module and setting up EasyAuth to work not only for global permissions (CanAccessAdminArea), but also for item-specific permissions: do you have permission to edit this blog post?
    • NodeJS / ClientJS: after using NodeJS earlier in the week, I had some email conversations with both Clement and Eric about using Ufront on NodeJS.  After this week it’s becoming a lot more obvious how this would work, and I’m getting close.  The main remaining task is to support asynchronous calls in these 3 things: Dispatch action execution calls, HttpRemotingConnection calls, and database calls – bringing some of the DB macros magic to async connections.  But it’s looking possible now, whereas it looked very difficult only 3 months ago.
  • CompileTime: I added a simple CompileTime.interpolateFile() macro.  It basically reads the contents of the file at macro time, and inserts it directly into the code, but it inserts it using String Interpolation, as if you had used single quotes.  This means you can insert basic variables or function calls, and they will all go in.  It’s like a super-quick and super-basic poor-man’s templating system.  I’m already using it for Ufront’s default error page and default controller page.
  • Detox: this one wasn’t this week, but a couple of weeks ago.  I am working on refactoring my Detox (DOM / Xml Manipulation) library to use Abstracts.  It will make for a much more consistent API, better performance, and some cool things, like auto-casting strings to DOM elements:

      "div.content".find().append( "<h1>My Content</h1>" );
  • My Work Project: Over the last two weeks I’ve updated SMS (my School Management System project, the main app I’ve been working on) to use the new Ufront.  This is the reason I’ve been finding so much that needs to be updated, especially trying to get my app to work with “ufront.Web.cacheModule”.
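
To give a concrete picture of the dependency injection pattern described in the list above, here is a minimal sketch using Minject.  The class names and the config value are hypothetical – this is just the shape of the pattern (map values once at the entry point, inject them where needed), not Ufront’s actual wiring:

import minject.Injector;

class AppConfig {
    public var siteName:String;
    public function new( siteName:String ) {
        this.siteName = siteName;
    }
}

class HomeController {
    // Minject fills this field in when the controller is created via the injector.
    @inject public var config:AppConfig;

    public function new() {}

    public function doDefault() {
        trace ( 'Welcome to ${config.siteName}' );
    }
}

class Main {
    static function main() {
        var appInjector = new Injector();
        // Map a concrete value once, at the app (or unit test) entry point...
        appInjector.mapValue( AppConfig, new AppConfig("My Site") );
        // ...and anything instantiated through the injector gets it supplied.
        var controller = appInjector.instantiate( HomeController );
        controller.doDefault();
    }
}

In a unit test you would map a fake AppConfig (or a mocked session or auth handler) instead, and the controller code would never know the difference.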
Categories
Haxe

neko.Web.cacheModule()

Until now I haven’t had to worry much about the speed of sites made using Haxe / Ufront – none of the sites or apps I’ve made have anywhere near the volume for it to be a problem, and the general performance was fast enough that no one asked questions. But I’m soon going to be part of building the new Haxe website, which will have significant volume.

So I ran some benchmarks using ab (Apache’s benchmarking tool), and wasn’t initially happy with the results. They were okay, but not significantly faster than your average PHP framework. Maybe I would have to look at mod_tora or NodeJS for deployment.

Then I remembered something: a single line of code you can add that vastly increases the speed: neko.Web.cacheModule(main).

Benchmarks

Here is some super dumb sample code:

class Server {
  static var staticInt = 0;
  static function main() {
    #if neko
      neko.Web.cacheModule(main); // comment out to test the difference
    #end 
    var localInt = 0; 
    trace ( ++staticInt ); 
    trace ( ++localInt ); 
  } 
} 

And I am testing with this command:

ab -n 1000 -c 20 http://localhost/ 

Here are my results (in requests/second on my laptop):

  • Apache/mod_php (no cache): 161.89
  • NekoTools server: 687.49
  • Apache/mod_neko (no cache): 1054.70
  • Apache/mod_tora (no cache): 745.94
  • Apache/mod_neko (cacheModule): 3516.04
  • Apache/mod_tora (cacheModule): 2185.30

First up: I assume mod_tora has advantages on sites that use more memory, but for a dummy sample like this the overhead outweighs the benefit.

Second, and related: I know these tests are almost worthless, we really need to be testing a real app, with file access and template processing and database calls.

Let’s do that, same command, same benchmark parameters:

  • Apache/mod_php (no cache): 3.6 (ouch!)
  • NekoTools server: 20.11
  • Apache/mod_neko (no cache): 48.74
  • Apache/mod_tora (no cache): 33.29
  • Apache/mod_neko (cacheModule): 351.42
  • Apache/mod_tora (cacheModule): 402.76

(Note: PHP has similar caching, using modules like PHP-APC. I’m not experienced in setting these up, however, and I’m happy with the neko performance I’m seeing, so I won’t investigate further.)

Conclusions:

  • the biggest speed up (in my case) seems to come from cacheModule(), not mod_tora. I believe once memory usage increases significantly, tora brings advantages in that arena, and so will be faster due to less garbage collection.
  • this could be made faster, my app currently has very little optimisation:
    • the template system uses Xml, which I assume isn’t very fast.
    • a database connection is required for every request
    • there is no caching (memcached, redis etc)
    • I think I have some terribly inefficient database queries that I’m sure I could optimise
  • Ufront targeting Haxe/PHP is not very fast out-of-the-box. I’m sure you could optimise it, but it’s not there yet.
  • This is running on my laptop, not a fast server. Then again, my laptop may be faster than a low end server, not sure.

Usage

So, how does it work?

#if neko neko.Web.cacheModule( main ); #end 

The conditional compilation (#if neko and #end) is just there so that you can still compile to other targets without getting errors. The cacheModule function has the following documentation:

Set the main entry point function used to handle requests.
Setting it back to null will disable code caching.

By “entry point”, it usually means the main() function that is called when your code first runs. So when the docs ask for a function to use as the entry point, I just use main – meaning the static function main() that I am currently in.

I’m unsure of the impact of having multiple “.n” files or a different entry point.

The cache is reset whenever the file timestamp changes: so when you re-compile, or when you load a new “.n” file in place.

If you want to manually disable the cache for some reason, you can use cacheModule(null). I’m not sure what the use case is for this though… why disable the cache?

Gotchas (Static variable traps with cacheModule)

The biggest gotcha is that static variables persist in your module. They are initialized just once, which is a big part of the speed increase. Let’s look at the example code I posted before:

class Server {
  static var staticInt = 0;
  static function main() {
    #if neko
      neko.Web.cacheModule(main); // comment out to test the difference
    #end 
    var localInt = 0; 
    trace ( ++staticInt ); 
    trace ( ++localInt ); 
  } 
} 

With caching disabled, both trace statements will print “1” every time (each request starts fresh, and ++ increments before tracing). With caching enabled, the staticInt variable does not get reset – it initializes at 0 once, and then every single page load will continue to increment it: 1, 2, 3 and so on – while localInt still prints “1” every time.

What does this mean practically (there is a short sketch after this list):

  • If you want to cache stuff, put it in a static variable. For example:
    • Database connections: store them in a static variable and the connection will persist.
    • Templates: read them from disk once, then store them in a static variable.
    • App config: especially if you’re parsing JSON or Xml, put the result in a static variable and it stays cached.
  • Things which should be unique to a request should not be stored in a static variable. For example:
    • Ufront has a class called NekoSession, which was entirely static methods and variables, and made assumptions that the statics would be reset between requests. Wrong! Having the session cached between different requests (by different users) was a disaster – every time you clicked a link you would find yourself logged in as a different user. Needless to say, we needed to refactor this to not use statics :) To approach it differently, you could use a static var sessions:StringMap<SessionData> (keyed by session ID) and actually have it work appropriately for as long as the cache stays alive.
    • Avoid singletons like Server.currentUser, or even User.current – these static variables are most likely going to be cached between requests, leading to unusual results.
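
Here is the sketch promised above – a minimal picture of which things belong in statics and which don’t. The connection details and file name are hypothetical, and a real app would handle errors; this just shows the pattern:

class Server {
    // Safe to cache: one connection per loaded module, reused across requests.
    static var cnx:sys.db.Connection;

    // Safe to cache: a template read from disk once.
    static var template:String;

    // NOT safe to cache: this would leak one user's data into another's request.
    // static var currentUser:User;

    static function main() {
        #if neko neko.Web.cacheModule( main ); #end

        // These initializations only run on the first request after a compile.
        if ( cnx == null ) cnx = sys.db.Mysql.connect({
            host: "localhost", port: 3306, user: "app",
            pass: "secret", socket: null, database: "mydb"
        });
        if ( template == null ) template = sys.io.File.getContent( "page.html" );

        // Per-request data: keep it in local variables (or the HttpContext).
        var params = neko.Web.getParams();
    }
}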
Categories
Edtech Haxe

Cross post: Why the Pomodoro technique doesn’t work for me

In case you’re interested: on my Personal Blog I have a post, “Why the Pomodoro Technique Doesn’t Work For Me”.  It’s about time management for creative workers.  Have a look…

Categories
Haxe

Language and API Design

Paul Graham has written some great essays over time, and most of them hold up surprisingly well as they age.  Take this article, “Five Questions About Language Design”.  It was written in May 2001 – over 12 years ago as I read it – and many of his predictions have come to pass (the dominance of web applications, the rise in popularity of niche programming languages), and most of his logic is still sound, and still insightful.

Two particular points I found interesting, and would be interesting to the Haxe community I spend most of my time in:

4. A Language Has to Be Good for Writing Throwaway Programs.

You know what a throwaway program is: something you write quickly for some limited task. I think if you looked around you’d find that a lot of big, serious programs started as throwaway programs. I would not be surprised if most programs started as throwaway programs. And so if you want to make a language that’s good for writing software in general, it has to be good for writing throwaway programs, because that is the larval stage of most software.

I think Haxe is beginning to show its strength here in gaming.  People are using it for Ludum Dare (write a game in a single weekend), and then later the game scales up to be a full-featured, multi-platform, often commercial game.  Haxe is good for a quick win (especially when targeting Flash), but then you have the option to target so many other platforms as your vision for your game/app expands.

I’d love to see ufront get this good.  I have so many things I would like to build as an app, but which I don’t feel justify an ongoing subscription to an online company.  Many of them could get an MVP (minimum viable product) out in a weekend, if the framework was good enough.  And it’s getting good enough – but there’s still a little way to go :)

And then this:

2. Speed Comes from Profilers.

Language designers, or at least language implementors, like to write compilers that generate fast code. But I don’t think this is what makes languages fast for users. Knuth pointed out long ago that speed only matters in a few critical bottlenecks. And anyone who’s tried it knows that you can’t guess where these bottlenecks are. Profilers are the answer.

Language designers are solving the wrong problem. Users don’t need benchmarks to run fast. What they need is a language that can show them what parts of their own programs need to be rewritten. That’s where speed comes from in practice. So maybe it would be a net win if language implementors took half the time they would have spent doing compiler optimizations and spent it writing a good profiler instead.

Nice insight.  I sometimes worry that the code I’m writing will be slow.  It’s hard to know if optimisations are worth it, especially if they may compromise code readability.  A better solution would be to focus on making profiling easy.  On the JS target you can easily use the browser tools to show profiling information.  Another option is to bake it into your Haxe code, something that “massivecover” lets you do.  Wiring that support from massivecover into ufront or other frameworks, and generating pretty reports, would be a very cool way to encourage people to write fast apps.

Categories
Haxe

You cannot use @:build inside a macro : make sure that your enum is not used in macro

If you ever come across this error, it can be a bit cryptic to understand.

Here’s some sample code that produces the error:

import haxe.macro.Expr;
import haxe.macro.Context;
import neko.Lib;

class BuildInMacroTests 
{
    public static function main() {
        var s:MyModel = new MyModel();
        MyMacro.doSomething( s );
    }
}

class MyModel extends sys.db.Object // has a build macro
{
    var id:Int;
    var name:String;
}

private class MyMacro 
{
    public static macro function doSomething(output : Expr) : Expr {
        return output;
    }
}

I’m still not sure I have a full grasp on it, but here are a few pointers:

  • It’s not necessarily an enum – it can happen with a class or interface that has a “@:build” macro also.
  • The basic sequence happens in this order:
    • BuildInMacroTests.main() is the entry point, so the compiler starts there and starts processing.
    • When it hits “var s:MyModel”, it does a lookup of that type, realizes the class needs a build macro to run, and runs the build macro.
    • When it hits “MyMacro.doSomething()”, it types the expression, and realizes that it is a macro call.  To run the macro, it must find the macro class, load it into the macro context, and execute it.
    • It finds the macro class – it’s in this file.
    • It tries to load this module (hx file) into the macro context, so it goes through the whole process of typing it again.
    • As it’s loading it into the macro context, it hits the “MyModel” build macro again, which it can’t do at macro time, so it spews out the error.
  • The basic solutions:
    • Wrap your build macro declarations in conditionals:
      #if !macro @:build(MyMacro.build()) #end class Object { ... }
    • Wrap anything that is not a macro in conditionals:
      #if !macro 
        class BuildInMacroTests {}
        class MyModel {}
      #else
        class MyMacro {}
      #end
    • Keep your macros in separate files:
      BuildInMacroTests.hx:
          class BuildInMacroTests {
              public static function main() {
                  var s:MyModel = new MyModel();
                  MyMacro.doSomething( s );
              }
          }
      
          class MyModel extends sys.db.Object {
              var id:Int;
              var name:String;
          }
      
      MyMacro.hx:
          import haxe.macro.Expr;
          import haxe.macro.Context;
          class MyMacro {
              public static macro function doSomething(output : Expr) : Expr {
                  return output;
              }
          }

A combination of the 1st and 3rd solutions is probably the cleanest.

Good luck!

Categories
Haxe

A Haxe/JS Debugging Tip

When targeting Javascript in Haxe 3, the output is “modern” style, which means that, to prevent polluting the global namespace or conflicting with other libraries, the output is wrapped in a simple function:

(function() {})()

Which is great.  And if you place breakpoints in your code, using:

js.Lib.debug();

Then your browser will launch the debugger inside this Haxe context, and you have access to all your classes etc.  But what if you want to fire up the browser’s JS console, and gain arbitrary access to something in your Haxe code?  Because it’s all hidden inside that function, you can’t.

Unless you have a workaround, which looks like this:

class Client 
{
    @:expose("haxedebug") @:keep
    public static function enterDebug()
    {
        js.Lib.debug();
    }
}

What’s going on: we have a class, and a function: “enterDebug”.  This function can go in any class that you use, really – it doesn’t have to be in Client or your Main class or anything.

The “js.Lib.debug()” statement launches the debugger in the Haxe context, as described before.  But the “@:expose” metadata exposes this function outside of the Haxe context.  The string defines what we expose it as: rather than the default “Client.enterDebug()”, we’ll just have “haxedebug()”.  And the “@:keep” metadata makes sure this won’t get deleted by the compiler’s dead code elimination, even though we never explicitly call this function in our code.

Now that we’ve done that, recompile, and voilà!  You can type “haxedebug()” into the Javascript console and start fiddling around inside the Haxe context.  Enjoy.

 

Categories
Haxe

Creating Complex URL Routing Schemes With haxe.web.Dispatch

I’ve looked at haxe.web.Dispatch before, and I thought it looked really simple – in both a good and a bad way. It was easy to set up and fast. But at first it looked like you couldn’t do complex URL schemes with it.

Well, I was wrong.

At the WWX conference I presented some of the work I’ve done on Ufront, a web application framework built on top of Haxe.  And I mentioned that I wanted to write a new routing system for Ufront.  Until now, Ufront has had a pretty powerful routing system, but it was incredibly obese – using dozens of classes, thousands of lines of code and a lot of runtime type checking, which is usually pretty slow.

Dispatch on the other hand uses macros for a lot of the type checking, so it’s much faster, and it also weighs in at less than 500 lines of code (including the macros!), so it’s much easier to comprehend and maintain.  Wrapping my head around the ufront framework took me most of a day, following the code down the rabbit hole, but I was able to understand the entire Dispatch class in probably less than an hour.  So that is good!

But there is still the question, is it versatile enough to handle complex URL routing schemes? After spending more time with it, and with the documentation, I found that it is a lot more flexible than I originally thought, and should work for the vast majority of use cases.

Learning by example

Take a look at the documentation to get an idea of general usage.  There’s some stuff in there that I won’t cover here, so it’s well worth a read.  Once you’ve got that understood, I will show how your API / routing class might be structured to get various URL schemes:

If we want a default homepage:

function doDefault() trace ( "Welcome!" ); 

If we want to do a single page /about/:

function doAbout() trace ( "About us" ); 

If you would like to have an alias route – a different URL that does the same thing:

inline function doAboutus() doAbout();

If we want a page with an argument in the route /help/{topic}/:

function doHelp( topic:String ) {
    trace ( 'Info about $topic' );
}

It’s worth noting here that if topic is not supplied, an empty string is used. So you can check for this:

function doHelp( topic:String ) {
    if ( topic == "" ) {
        trace ( 'Please select a help topic' );
    }
    else {
        trace ( 'Info about $topic' );
    }
}

Now, most projects get big enough that you might want more than one API class, and more than one level of routes.  Let’s say we’re working on the haxelib website (a task I might take on soon).  There are several pages to do with projects (or haxelibs), so we might put them all together in ProjectController, and we might want the routing to be available in /projects/{something}/.

If we want a sub-controller we use it like so:

/*
  /projects/             => projectController.doDefault("")
  /projects/{name}/      => projectController.doDefault(name)
  /projects/popular/     => projectController.doPopular()
  /projects/bytag/{tag}/ => projectController.doByTag(tag)
*/
function doProjects( d:Dispatch ) {
    d.dispatch( new ProjectController() );
}

As you can see, that gives a fairly easy way to organise both your code and your URLs: all the code goes in ProjectController and we access it from /projects/. Simple.

With that example however, all the variable capturing is at the end. Sometimes your URL routing scheme would make more sense if you were to capture a variable in the middle. Perhaps you would like:

/users/{username}/ 
/users/{username}/contact/ 
/users/{username}/{project}/ 

Even this can still be done with Dispatch (I told you it was flexible):

function doUsers( d:Dispatch, username:String ) {
    if ( username == "" ) {
        println("List of users");
    }
    else {
        d.dispatch( new UserController(username) );
    }
}

And then in your UserController class:

class UserController {
    var username:String;

    public function new( username:String ) {
        this.username = username;
    }

    function doDefault( project:String ) {
        if ( project == "" ) {
            println('$username\'s projects');
        }
        else {
            println('$username\'s $project project');
        }
    }

    function doContact() {
        println('Contact $username');
    }
}

So the username is captured in doUsers(), and then is passed on to the UserController, where it is available for all the possible controller actions. Nice!

As you can see, this better represents the hierarchy of your site – both your URL and your code are well structured.

Sometimes, it’s nice to give users a top-level directory. Github does this:

http://github.com/jasononeil/

We can too. The trick is to put it in your doDefault action:

function doDefault( d:Dispatch, username:String ) {
    if ( username == "" ) {
        println("Welcome");
    }
    else {
        d.dispatch( new UserController(username) );
    }
}

Now we can do things like:

/ => doDefault() 
/jason/ => UserController("jason").doDefault("") 
/jason/detox => UserController("jason").doDefault("detox") 
/jason/contact => UserController("jason").doContact() 
/projects/ => ProjectController.doDefault("") 
/about/ => doAbout() 

You can see here that Dispatch is clever enough to know what is a special route, like “about” or “projects/something”, and what is a username to be passed on to the UserController.

Finally, it might be the case that you want to keep your code in a separate controller, but have it stay in the top-level routing namespace.

To do this, add some inline methods to your Routes class:

inline function doSignUp() (new AuthController()).doSignUp(); 
inline function doSignIn() (new AuthController()).doSignIn(); 
inline function doSignOut() (new AuthController()).doSignOut(); 

This will let the route be defined at the top level, so you can use /signup/ rather than /auth/signup/. But you can still keep your code clean and separate. Winning. The inline will also help minimise any performance penalty for doing things this way.

Comparison to Ufront/MVC style routing

Dispatch is great.  It’s lightweight, so it’s really fast, and it will be easier to extend or modify the code base, because it’s easier to understand.  And it’s flexible enough to cover all the common use cases I could think of.

I’m sure if I twisted my logic far enough, I could come up with a situation that doesn’t fit, but for me this is a pretty good start.

There are three things the old Ufront routing could do that this can’t:

  1. Filtering based on whether the request is POST / GET etc.  It was possible to do some other filtering also, such as checking authentication.  I think all of this can be better achieved with macros rather than the runtime solution in the existing ufront routing framework.
  2. LocalizedRoutes, but I’m sure with some more macro love we could make even that work. Besides, I’m not sure that localized routes ever functioned properly in Ufront anyway ;)
  3. Being able to capture “controller” and “action” as variables, so that you can set up an automatic /{controller}/{action}/{...} routing system. While this does make for a deceptively simple setup, it is in fact quite complex behind the scenes, and not very type-safe. I think the “Haxe way” is to make sure the compiler knows what’s going on, so any potential errors are captured at compile time, not left for a user to discover.

Future

I’ll definitely begin using this for my projects, and if I can convince Franco, I will make it the default for new Ufront projects. The old framework can be left in there in case people still want it, and so that we don’t break legacy code.

I’d like to implement a few macros to make things easier too.

So this:

var doUsers:UserController; 

Would automatically become:

function doUsers( ?d:Dispatch, username:String ) {
    d.dispatch( new UserController(username) );
}

The arguments for “doUsers()” would match the arguments in the constructor of “UserController”.  If “username” was blank, it would still be passed to UserController.doDefault(), which could test for a blank username and take the appropriate action.

I’d also like:

@:forward(doSignup, doSignin, doSignout) var doAuth:AuthController; 

To become:

function doAuth( ?d:Dispatch ) {
    d.dispatch( new AuthController() );
}
inline function doSignup() (new AuthController()).doSignup();
inline function doSignin() (new AuthController()).doSignin();
inline function doSignout() (new AuthController()).doSignout();

Finally, it would be nice to have the ability to do different actions depending on the method. So something like:

@:method(GET) function doLogin() trace ("Please log in");

@:method(POST) function doLogin( args:{ user:String, pass:String } ) {
    if ( attemptLogin(args.user, args.pass) ) {
        trace ( 'Hello ${args.user}' );
    }
    else {
        trace ( 'Wrong password' );
    }
}

function doLogin() {
    trace ( "Why are you using a weird HTTP method?" );
}

This would compile to code:

function get_doLogin() {
    trace ("Please log in");
}

function post_doLogin( args:{ user:String, pass:String } ) {
    if ( attemptLogin(args.user, args.pass) ) {
        trace ( 'Hello ${args.user}' );
    }
    else {
        trace ( 'Wrong password' );
    }
}

function doLogin() {
    trace ( "Why are you using a weird HTTP method?" );
}

Then, if you do a GET request, it first attempts “get_doLogin”. If that fails, it tries “doLogin”.

The aim then, would be to write code as simple as:

class Routes {
    function doDefault() trace ("Welcome!");
    var users:UserController;
    var projects:ProjectController;
    @:forward(doSignup, doSignin, doSignout) var doAuth:AuthController;
}

So that you have a very simple, readable structure that shows how the routing flows through your apps.

When I get these macros done I will include them in ufront for sure.

Example Code

I have a project which works with all of these examples, in case you have trouble seeing how it fits together.

It is on this gist.

In it, I don’t use real HTTP paths, I just loop through a bunch of test paths to check that everything is working properly. Which it is… Hooray for Haxe!

Categories
Edtech Haxe

It’s free, but do you mind paying?

Here’s something I’d like to see more of: UberWriter is free (open source, GPL3), yet by default if you go to install it you’ll have to pay $5.  The old “free as in freedom of speech” rather than “free as in free beer”.

From the creator’s point of view, “Free as in speech” means several things:

  • The project is open to collaboration – if you have something to add, please do!
  • If you see a different future for this project than I do, you don’t have to ask my permission.  You can build on what I’ve started and do something new and different.  I may or may not want to merge your changes back in.
  • If you’re not someone that’s likely to pay me money (you have a different native language, you live in an area I can’t sell to, you don’t have enough money…) then that’s okay – feel free to take what I’ve done anyway and see if you can build on it.
  • If you’re really poor (developing world, unfunded startup, student) then you can still use the software and pay it back (or pay it forward) later.  I’m just glad people are using (and enjoying) something I’ve made.
  • If you’re worried about security, you can know exactly what your software is doing by getting a developer to audit the code.  I promise I’m not working for your corrupt government.
  • If I ever close shop and discontinue the product, you can keep using it, without having to worry about where to get new copies.  Feel free to make new copies, or to hire a developer to maintain it, even to release your own version.  I promise not to get angry.

But from the creator’s point of view, “free as in beer” has several negative implications:

  • I don’t have a strong plan to make money off this, so it will remain a side-project or a hobby.  I might love it, but I’ll never give it the time, money and support it needs to be amazing.
  • I have to pay my bills.  So the work I do that pays bills will always be more important than the work I do for this project.

Commercial projects that succeed (especially software projects) often have a founder who is hugely passionate about it, and absolutely stoked that they get to work on their passion as their full time job.  For many open source developers, this isn’t a reality.

So what if you want to balance those two things:

  1. Create, invent, make art.  And give it away to the world in the most generous, considerate way possible.
  2. Make money so that I can continue to develop this, perfecting it and supporting it and giving it every opportunity to succeed.

Well, that’s what UberWriter is doing.  It has all the same freedoms as “free as in speech”.  And if you really don’t want to pay for it, you can find ways to get it for free, and that’s allowed too, it’s not illegal and it won’t make the creator cry.  But, if you want to get it the easiest way, and you want to support the developer, it’s $5.

I hope he does well, makes some money, maybe even is able to make it his main job – so that he is empowered to make some good software become really great software.  And I hope to see more people balance this act of being generous with earning a living… myself included.

Categories
Haxe

20 Questions I Asked Myself

Bernadette Jiwa is a marketing / branding / idea-spreading expert (and a Perth local!), and when you subscribe to her blog, “The Story of Telling”, she sends you a PDF with 20 questions to ask yourself before launching your next product or idea.

At WWX 2013, this year’s Haxe conference, I’ll be presenting a talk on developing web apps, and using “ufront” to do that.  Ufront is a framework made mostly by Franco Ponticelli, and initially modelled on the .NET MVC framework.  Ufront is pretty powerful, but it has lacked documentation and hasn’t had a lot of users.  I’ve used it for my last project, added a bunch of helpers, and think it’s ready for some more attention.  A re-launch, if you will.

So I went through the 20 questions Bernadette gives in her PDF, and tried to apply it to ufront, Haxe and web-apps.

My answers to the “20 questions” for ufront (PDF)

It was pretty eye-opening, and really helped me zero in on why I think this approach is better than Rails, Django, Node/Express or other frameworks.  I came up with this tagline:

ufront: the client / server framework for Haxe that lets you write the next big (website/web-app/mobile-app/game/thing)

Categories
Haxe

The new Map syntax in Haxe 3

I was using the Haxe3 Release Candidate for well over a month before I realised that the new Map data structures could be created with a nice syntax.  The basic gist is this:

var map1 = [ 1=>"one", 2=>"two", 3=>"three" ];

Like Arrays, but you use the “=>” operator to define both the key and the value.  And Haxe’s type inference is as strong as ever.  Here’s what the compiler picks up:

var map1 = [ 1=>"one", 2=>"two", 3=>"three" ];
$type (map1); // Map<Int, String>

var map2 = [ "one"=>1, "two"=>2, "three"=>3 ];
$type (map2); // Map<String, Int>

var map3 = [
    Date.now() => "Today",
    Date.now().delta(24*60*60*1000) => "Tomorrow",
    Date.now().delta(-24*60*60*1000) => "Yesterday"
];
$type (map3); // Map<Date, String>

var map4 = [
    { name: "Tony Stark" } => "Iron Man",
    { name: "Peter Parker" } => "Spider Man"
];
$type (map4); // Map<{ name : String }, String>

var map5 = [
    [1,2] => ["one","two"],
    [3,4] => ["three","four"],
];
$type (map5); // Map<Array<Int>, Array<String>>

So it’s pretty clever.  If for some reason you need to explicitly type your map as a StringMap, IntMap, or ObjectMap, you can:

var stringMap:StringMap<Int> = [
    "One" => 1,
    "Two" => 2
];
$type(stringMap); // haxe.ds.StringMap<Int>

var intMap:IntMap<String> = [
    1 => "One",
    2 => "Two"
];
$type(intMap); // haxe.ds.IntMap<String>

var objectMap:ObjectMap<{ name:String }, String> = [
    { name: "Tony Stark" } => "Iron Man",
    { name: "Peter Parker" } => "Spider Man"
];
$type(objectMap); // haxe.ds.ObjectMap<{ name : String }, String>

But if you try to do anything too funky with types, the compiler will complain.  Haxe likes to keep things strictly typed:

var funkyMap = [
    { name: "Tony Stark" } => "Iron Man",
    { value: "Age" } => 25
]; // Error: { value : String } has no field name ... 
   // you are a bad person, and your items are not comprehensible 
   // to Haxe's typing system

Finally, Haxe won’t let you define duplicate keys using this syntax:

var mapWithDuplicates = [
    1 => "One",
    2 => "Two",
    1 => "uno"
]; // Error: Duplicate Key ... previously defined (somewhere)

If you’re using an object map, it’s only a duplicate if you’re dealing with the exact same object, not a similar one.  For example, this is allowed:

var similarObjectKeys:ObjectMap<Array<Int>, String> = [
    [0] => "First Array object",
    [0] => "Second Array object"
]; // Works, you now have 2 items in your map.

But if you use the exact same object, Haxe will pick it up:

var key = [0];
var sameObjectKey = [
    key => "First Array object",
    key => "Second Array object"
]; // Error: Duplicate Key ...

So there you have it.  A nice feature that I didn’t see mentioned anywhere else.  Thanks Haxe team!

Update:

It’s worth mentioning that once you have created your map, you can use array access (“[” and “]”) to read or modify entries in your map.

var map = [ 1=>"one", 2=>"two", 3=>"three" ];

// Reading a value
map[1];   // "one"
var i = 2;
map[i];   // "two"
map[++i]; // "three"

// Setting a value
map[4] = "four";
map[1] = "uno";
map[i] = "THREE";
map;      // [ 1=>"uno", 2=>"two", 3=>"THREE", 4=>"four" ]
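
And one more usage note while we’re here (standard Map behaviour, not specific to the new syntax): you can iterate a map with a regular for loop – keys() gives you the keys, and iterating the map itself gives you the values.  Note that the iteration order is not guaranteed.

var map = [ 1=>"one", 2=>"two", 3=>"three" ];

for ( key in map.keys() ) {
    trace ( '$key => ${map[key]}' );
}

for ( value in map ) {
    trace ( value ); // "one", "two", "three" (in some order)
}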
Categories
Haxe

Trying to understand dependency injection

Coming closer to the end of writing my first really complex app, I’m beginning to wish I’d taken the time at the start to properly learn and grasp some of the concepts that I see other programmers using.  My client-side code uses a basic MVC pattern, but a bunch of Flash developers use something that comes across to me as much more complex – as seen in RobotLegs or some of the Haxe ports, such as MMVC.

These frameworks are all about Inversion Of Control (and because we love acronyms: IOC).  And they use Dependency Injection (DI) to do this.  You can see already why the learning curve can be steep, especially if you haven’t come across this pattern before.

So today I googled “I don’t understand dependency injection” and got this fantastic explanation by Kevin William Pang.  I found it really helpful, and I’m sold.  I’d been wondering how the hell I was supposed to unit-test some of my more complex client code anyway… If, like me, you haven’t understood this concept in much detail: his article is well worth a read, so I won’t bother trying to explain it here.

But I’m still not sure where the fancy dependency injection tools and frameworks come in.  I see the value in dependency injection itself, and “injecting” dependencies through plain constructor arguments seems just fine.  Meanwhile, the tools which are supposed to help, such as SwiftSuspenders or Minject, seem really verbose to me, and don’t appear to offer a lot more than the constructor method.  (A quick sketch of what I mean by constructor injection is below.)
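
For clarity, here is the constructor-style injection I’m describing – a hypothetical example (the IMailer / SignupService names are made up, and SmtpMailer / MockMailer are left unimplemented), not code from any particular framework:

// The dependency is an interface, so tests can pass in a fake implementation.
interface IMailer {
    public function send( to:String, subject:String, body:String ):Void;
}

class SignupService {
    var mailer:IMailer;

    // The dependency is "injected" as a plain constructor argument.
    public function new( mailer:IMailer ) {
        this.mailer = mailer;
    }

    public function signup( email:String ) {
        // ... create the account, then: ...
        mailer.send( email, "Welcome!", "Thanks for signing up." );
    }
}

// In production:  new SignupService( new SmtpMailer() );
// In a unit test: new SignupService( new MockMailer() );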

Maybe that style is more relevant for gaming.  Or maybe its benefits show up with more complex projects.  Or maybe I just haven’t been enlightened.

Either way, I’ll try to get enlightened… I think it’s worth spending time learning these patterns.  They’ve obviously gained traction for a reason, and a lot of better developers than me swear by them, so I’ll try to wrap my head around them.  Maybe one day they’ll seem as clear to me as MVC, or even OOP, do now (both of which took me a while to grasp at first).

If you have any suggestions / comments / explanations – I’d love to hear them in the comments section.

Categories
Haxe

Haxe3 Features: Variable Substitution in Haxe3 (aka String Interpolation)

One of my favourite features in the upcoming Haxe3 is also one of the simplest: String Interpolation (also called variable substitution).  If you were using Std.format() in Haxe2, you’ll recognise it.  It lets you do this:

var name = "Jason";
var age = 25;
trace ('My name is $name, I am $age years old.');
// My name is Jason, I am 25 years old

The first point to make is that it is triggered by using single quotation marks (‘), rather than double (“).  If you use double quotes, everything is treated as a plain string:

trace ("My name is $name, I am $age years old.");
// My name is $name, I am $age years old

The next point to make is that this all happens at compile time.  So the trace we made above, in Javascript, would output as:

console.log("My name is " + name + ", I am " + age + " years old");

So you can only do this with strings you have at compile time, as you’re writing the code.  If you want Strings that are given at runtime to have variable interpolation, you should use a templating library like erazor.

Now a few other things to note:

  1. If you want to put in a normal dollar sign, you can use two $, like this:
    trace ('The item costs $20');
    // "20" is not a valid variable name, so the "$" is left as a literal.
    // Same as ("The item costs $20");
    
    trace ('That price is in $USD');
    // Error: Unknown Identifier "USD"
    trace ('That price is in $$USD');
    // Same as ("That price is in $" + "USD");
    
    var cost = 25;
    trace ('That item costs $$$cost');
    // The first two "$" are for the literal sign, the third is part of the variable.
    // Same as ("That item costs $" + cost);
  2. If you want to access anything more than a straight variable, use curly brackets:
    // Property access
    trace ('My name is ${user.name} and I am ${user.age} years old.');
    // Same as: "My name is " + user.name + " and I am " + user.age + " years old.";
    // Outputs: My name is jason and I am 25 years old.
    
    // A simple haxe expression
    trace ('$x + $y = ${x + y}');
    // Same as: "" + x + " + " + y + " = " + (x + y);
    // Outputs: 1 + 2 = 3
    
    // A function call 
    trace ('The closest Int to Pi is ${Math.round(3.14159)}');
    // Same as: "The closest Int to Pi is " + Math.round(3.14159);
    // Outputs: The closest Int to Pi is 3
  3. There is no HTML escaping, so be careful:
    var bold = "<b>Bold</b>";
    trace ('<i>Italic</i> $bold');
    // <i>Italic</i> <b>Bold</b>
    
    var safeBold = StringTools.htmlEscape("<b>Bold</b>");
    trace ('<i>Italic</i> $safeBold');
    // <i>Italic</i> &lt;b&gt;Bold&lt;/b&gt;
    
    var safeEverything = StringTools.htmlEscape('<i>Italic</i> $bold');
    trace (safeEverything);
    // &lt;i&gt;Italic&lt;/i&gt; &lt;b&gt;Bold&lt;/b&gt;

There you have it: String Interpolation, one of the most helpful (and easy to comprehend) features of the upcoming Haxe3 release.

Categories
Haxe

Mapping any SSH Server to a Network Drive in Ubuntu

My friend Justin showed me a cool trick this week – mapping any SSH server as a network drive in Ubuntu.  This is really useful for web development, where you have a whole bunch of servers that you have to connect to, transfer files to and from, and make small edits on.

The integration is pretty seamless – it shows up in my file browser (just like a local USB drive), I can open files in any app I want, and every time I save the changes are synced back.  Pretty cool.

Here’s how:

  1. Open your Home Folder
  2. In the menu, choose “File -> Connect to Server”
  3. Change the type to “SSH”
  4. Enter the address of your server.
    eg. myvps.mydomain.com or 192.168.1.5
  5. Enter your username and password.
    (Note: If you have public / private keys set up, just enter the username.)
  6. Click Connect

This all comes built in with Ubuntu 12.04.

This is me browsing files on an Amazon EC2 instance, opening a few of them and editing them. Sweet!

Categories
Edtech Haxe

A new project: OurSchoolDiary

So I’m working on a new project that has been brewing in my mind for a while.  I’m aiming to have a working prototype by the end of August, have users beta testing through till December, and hopefully have my first paying clients by next year.

A screenshot of some of the first code for the project.

What is it?

The app is called “OurSchoolDiary”.  It lets each school create their own school diary app, where students can sign up and keep track of their homework, assignments and tests.  The big win over competing solutions is that when a teacher adds an assessment or task, it automatically shows up in the student’s diary.  We’re helping even unorganised students stay on top of their work.

On top of that, we’ll let parents receive emails informing them of their child’s progress. This way parents and teachers can work together to make sure they support their child’s education as much as possible.

And the final advantage, the school gets to show off how cutting-edge they are because they get their own “app”.  Logo and everything!

A Blog

As I go, I hope to keep track of my progress on this blog. (I’ll post to this category specifically, so you can subscribe by RSS).

Why keep a blog?

  • For the history, in case it is ever significant
  • For my own reflection and understanding, regardless of if it is a success.
  • To show off some cool technologies I’m using:
    • Web apps – I’m not aiming for distribution through the App Store; I’m going with a full HTML stack.  Not even a hybrid using PhoneGap.  This is a page you can view in a browser, and use offline, and the experience will be identical.
    • Haxe – this is a sweet programming language that lets you target Flash, Javascript, PHP, Java, C#, C++ and a platform called Neko.  Up until now its main success has been in the indie games market, but I believe it is mature enough to work for a full web-based productivity app.  Client and server.  We’ll find out how that goes!
  • To look back on my decision making processes after we’ve launched, and see if my thinking was accurate.
  • To raise interest in the project
  • To explore some of the topics I find fascinating:
    • Usability
    • Design
    • New Marketing
    • Business Strategy
    • Contribution to the community
    • And a few more…

That’s all for now.  I’ll try to submit a couple of posts a week (at least) until I get this thing off the ground.