
Writing a framework: web application architectures I’m inspired by

  1. I’m going to write a web app framework (again) (maybe)
  2. Writing a framework: web application architectures I’m inspired by

A look at recurring architectural patterns I see in both the front end and the back end, which have the potential to tie together into a full stack framework.

In my last post I said part of my reason for wanting to write another web framework was that I’ve been exposed to similar ideas in both the front end and back end, and wanted to experiment with an architecture that ties it all together.

In this post, I’m going to explore those ideas: unidirectional data flow, the Elm architecture, CQRS and event sourcing. Then I’ll look at the common themes I see tying them together.

State, views, and one-directional data flow

Almost every popular front end architecture I’ve encountered recently shares an idea: you have data representing the current state of your page, and you use that state to render the view that the user can see and interact with.

If you want to change something on the page, you don’t update the page directly, you update the state, and that causes the view to update.

You might have heard this described in a few ways:

  • Data Down, Actions Up: the data flows down from the state to the view. And the actions come up from the view to modify the state.
  • Model, View, Controller: You have a data “model” layer that holds information about the current state, a “view” layer that knows how to display it, and a “controller” that does the communication in the middle.
  • Unidirectional data flow: You often hear this term in the React ecosystem. Data flows from the top of your view down into the nested components. So a component only knows about the data passed into it, and nothing else. The data always flows downwards. How do you change the data then? As well as passing down the data from the state, we also pass down a function that can be used to update the state.
An illustration showing a circular flow of data and actions.  There is an arrow from “State” to “View” labelled “Data down” and from “View” back to “State” labelled “Actions Up”.
Actions Up, Data Down. Every application has “State” (data for the application) and “View” (what the user sees). The view is always rendered and updated from the current state. (We call this “Data down”). And the view can trigger actions to update the state. (We call this “Actions up”).

This pattern is used in frameworks as diverse as React, Ember and Elm. Why is it so common? Here are some of the advantages it provides:

  • Each function in your code has one job: to turn the state into a view, or to update the state in response to an action. This makes it nice and easy to wrap your head around an individual piece of code.
  • The functions don’t need to know about each other. If you have a “to-do list” app, and an action that adds a new to-do item – you don’t need to know the 3 different places it’ll show up in the UI and change them all – you just update the state. Likewise, if you want to add a new view, you don’t have to touch all the places that edit the “to-do” list, you can just look at the current state, however it came to be, and use that data to render your page.
  • It becomes easy to debug. If there’s a bug, you can check if the state data is correct. If the state data looks correct, then the bug will be in one of your view functions. If that state data looks wrong, then the bug is probably in one of your action functions that change the state.
  • It makes it easier to write tests. Your action functions test simple things: if our state looks like this, and we perform this action, then the state should now look like that. That’s easy to write a unit test for. And for your view functions, you can write tests that use mocked state data to test all the different ways your view might be rendered. You can even create “stories” with Storybook, or take visual snapshots, for quick visual tests of many different ways the UI looks.
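To make this concrete, here’s a minimal sketch of the pattern in Haxe (my own illustration, not code from any particular framework): the state is plain data, the view is a pure function of that state, and the only way to change anything is to send an action through a single update function.

    // A minimal "data down, actions up" sketch. All names are illustrative.
    enum Action {
        AddTodo(text:String);
        CompleteTodo(id:Int);
    }

    typedef Todo = { id:Int, text:String, complete:Bool };
    typedef State = { todos:Array<Todo>, nextId:Int };

    class TodoApp {
        // Actions up: the only code allowed to produce a new state.
        static function update(state:State, action:Action):State {
            return switch action {
                case AddTodo(text):
                    { todos: state.todos.concat([{ id: state.nextId, text: text, complete: false }]),
                      nextId: state.nextId + 1 };
                case CompleteTodo(id):
                    { todos: [for (t in state.todos)
                        t.id == id ? { id: t.id, text: t.text, complete: true } : t],
                      nextId: state.nextId };
            }
        }

        // Data down: the view is re-rendered from the current state, and nothing else.
        static function view(state:State):String {
            var items = [for (t in state.todos) '<li>${t.text}${t.complete ? " (done)" : ""}</li>'];
            return "<ul>" + items.join("") + "</ul>";
        }

        static function main() {
            var state:State = { todos: [], nextId: 1 };
            state = update(state, AddTodo("Plan week"));
            state = update(state, CompleteTodo(1));
            trace(view(state)); // <ul><li>Plan week (done)</li></ul>
        }
    }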

The Elm architecture

Elm takes this concept to the extreme. Because the language and the framework are tightly integrated, they can force you to follow the good advice of “data down, actions up”.

The Elm architecture has four parts:

  • Model – the state of your application,
  • View – a way to turn your state into HTML,
  • Messages – a message triggered by the UI (like clicking a button) that has information about an action the user is requesting (like attempting to mark an item as complete)
  • Update – a way to update your state based on “messages”
An illustration titled "The Elm Architecture" showing a circular flow diagram between State, View, Html and Update. There is some pseudocode in each of "State", "View", "Html" and "Update", which is described in the caption.

The Elm Architecture.

State. The page has some current state that is used to render the view. We use the type system to make sure the structure of this is consistent. In this example, it might have a title “My work” and a list of tasks like “Plan week” and “Check Slack”.

View. Render HTML based on the current state. Following on the example above, we might have a view function that receives our state, and calls functions to render a <h1> element and a <ul> with the list of tasks, and a form for adding new to-do items.

Html. We never manually edit HTML or the DOM, we just update state, which updates the view, and the framework checks what HTML needs changing. In the example above, this would be the rendered h1, ul and form HTML / DOM produced by the framework.

Update. The only type of events we trigger from the HTML view are strictly typed and exactly what our update function is expecting. When an action happens, the framework uses the update function to work out what the new state should be, based on the previous state and the action that occurred. Following on the example, an “AddTodo” action might append a new item to the list, while a “CompleteTask” action might remove an item from the list. The only way to update the state is to use the update function, one action at a time. This makes state management easy to unit test and debug!

There is also the opportunity to interact with the outside world – things like a Backend API. Elm provides a way to do this from the Update function (which can in turn trigger new actions) but it doesn’t have strong opinions on what happens in the Backend API.

And Elm will enforce this. You can only update the view of the page via your “view” function. Your “view” function only has access to the model to decide what to render. The only actions your view can trigger are the list of messages you define. And you can only update the state in the model using your update function to change things as messages come through.

And Elm has a fantastic type system and compiler to make this all work really nicely together. To show how it works, imagine you have a “to-do list” and you have a button to mark an item as complete:

  • In your “view” function where you render the button, you can set an “onClick” event.
  • The “onClick” event will trigger a message that something has happened. You decide the names of the possible messages that can happen, so we might choose a name like “MarkToDoAsComplete”.
  • Because we have told Elm we have a message called “MarkToDoAsComplete”, it will force us to handle this message in our “update” function.
  • In our “update” function we write the code that updates the model, setting the current to-do as complete.
  • When a user views the page they see the button. When they click the button, the message is triggered, the update function is run, the model is updated with the to-do item now marked as complete, and the view updates in response. By the time we’ve done all the things the compiler asks, it all just works.

The great thing about this is that Elm knows exactly what code is needed for all the pieces of your application to work. If you’re missing anything, it will give you a nice error message showing what to fix. This means that you never get runtime errors in your Elm code.

But even nicer than that, it means you have a great workflow:

  • You add your button, and a “MarkToDoAsComplete” message
  • The compiler tells you that message needs to be added to your list of app messages. You do that.
  • The compiler tells you that your “update” function needs to handle the message. You do that.
  • It now all works.
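You can get a feel for this workflow in any language with exhaustive pattern matching. Here’s a hedged sketch in Haxe rather than Elm (the names mirror the example above): the moment you add the new message, the compiler rejects every switch that doesn’t yet handle it, which is exactly what drives the checklist above.

    enum Msg {
        AddTodo(text:String);
        MarkToDoAsComplete(id:Int); // the newly added message
    }

    typedef Todo = { id:Int, text:String, complete:Bool };

    class TodoUpdate {
        static function update(todos:Array<Todo>, msg:Msg):Array<Todo> {
            return switch msg {
                case AddTodo(text):
                    todos.concat([{ id: todos.length + 1, text: text, complete: false }]);
                // Until this case is written, compilation stops with an
                // "Unmatched patterns" error pointing at exactly the code
                // we still need to write.
                case MarkToDoAsComplete(id):
                    [for (t in todos) t.id == id ? { id: t.id, text: t.text, complete: true } : t];
            }
        }

        static function main() {
            var todos = update([], AddTodo("Plan week"));
            trace(update(todos, MarkToDoAsComplete(1)));
        }
    }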

This “chase the compiler” workflow is what originally got me excited about the Elm language, not just the architecture – you can see Kevin Yank’s talk “Developer Happiness on the Front End with Elm” for a more detailed overview.

(As a bonus, if you do spot anything wrong, the strict framework for updating state based on actions, one at a time, allows powerful debugging tools like “time travel debugging”, where you can replay events one at a time to see their effect.)

Command Query Responsibility Segregation (CQRS)

On the back end, we sometimes find a similar pattern to “data down, actions up”. It’s called “Command Query Responsibility Segregation”. You separate the queries (data) and the commands (actions) into separate code paths, separate API endpoints, or even separate services.

If your back end uses an SQL database, you can think of the “queries” as SELECT statements, and the “commands” as INSERT or UPDATE statements.

An illustration titled "Command / Query Responsibility Separation CQRS" showing a circular flow diagram.

A Database is at the top, with an arrow labelled "SELECT FROM..." pointing to a section named "Query (Data Down)", which has an arrow to "Client UI", which has an arrow to "Command (Action Up)", which has an arrow labelled "INSERT INTO" pointing back to the database.
Command Query Responsibility Segregation (CQRS) encourages splitting up the “queries” (ways of reading the state) from the “commands” (ways of modifying the state). This ends up with many of the same benefits as “data down, actions up”, but for back end endpoints or services.

We use separate code paths for Commands (writes) and Queries (reads). Rather than having an API return new data after a command, we have the client repeat the full query.

And you end up with similar advantages:

  • Each endpoint has one job.
  • The endpoints don’t need to know about each other.
  • It becomes easier to debug.
  • The command endpoints and the query endpoints can adopt different scaling strategies. For example, caching can be applied to the “query” endpoints.
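As a sketch of what this separation can look like in code (an assumed example using Haxe’s sys.db API, not taken from any particular project), the query side only ever reads, and the command side validates and writes but returns nothing – the client re-runs the query to see the result:

    import sys.db.Connection;
    import sys.db.Sqlite;

    class TodoApi {
        var db:Connection;

        public function new(db:Connection) {
            this.db = db;
        }

        // Query (data down): read-only, so it's safe to cache or to route
        // to a read replica.
        public function listTasks():List<Dynamic> {
            return db.request("SELECT id, task, complete FROM tasks ORDER BY id").results();
        }

        // Command (action up): validate and write. No data comes back.
        public function addTask(task:String):Void {
            if (task == "") throw "Task text is required";
            db.request("INSERT INTO tasks (task, complete) VALUES (" + db.quote(task) + ", 0)");
        }

        static function main() {
            var db = Sqlite.open("todo.db");
            db.request("CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, task TEXT, complete INTEGER)");
            var api = new TodoApi(db);
            api.addTask("Plan week");
            trace(api.listTasks());
        }
    }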

Event Sourcing

When we talked about the Elm architecture, we saw a front end framework with a strict way to update the current state: by processing one message at a time. Event sourcing brings a similar concept to our back end, and crucially, to our data and our “source of truth”.

It’s normal for the “source of truth” in a web application to be a database that represents the current state of all of your data. For a todo list, you might have a row for each todo item, and columns for the text of the item, whether it is complete or not, and the order it appears in the list.

That table would be your source of truth.

A diagram titled "event sourcing" with two parts. One part is labelled "Source of truth: current state" and shows a traditional database table view with a row for each to-do task, and columns for "id", "task" and "complete".

The other part is labelled "Source of truth: events". It shows a sequence of squares representing events: "Create Task", "Create Task", "Complete Task", "Create Task". Each event has associated data like an id or task text.

Event Sourcing – changing the source of truth. Traditionally in a web application the “source of truth” might be a database table that represents the current state of the application. In event sourcing, the source of truth is an event log: all of the actions that occurred, one at a time. We can use this event log to build up a view of the current state.

Event sourcing is about changing the “source of truth” to be the events that occurred. Rather than caring exactly which todo items currently exist, and if they are complete, we care about when a user created a task, or marked a task as complete, or changed the order of the tasks in a list. These are the “events”, and they are our “source” of truth.

And we can process them, one event at a time, to build up a view of the data (in event sourcing these are often called “projections” of the data). Some projections might look very similar to what we had before – a database table with a row for each todo item, a column for the item text, whether it is complete, and the order in the list.

The power of event sourcing is that we can also create other views of the data. Perhaps we want to create a trend line graph showing how many open tasks we’ve had over time. If our source of truth was the current state, we wouldn’t be able to tell you how many tasks you had open last week (or this week last year!). With event sourcing, we can go back over all of the events, and build new views of the data.

A diagram showing the flow of data in an event sourced application, from "Client Actions" to "Command Handler" to "Event Log" to "Projections" (we show two examples) to a view of the data (again with two examples).
Event Sourcing allows us to create different “projections” for different ways of viewing the data.

Client actions. We can start with actions that are triggered by the user. Event sourcing as a pattern has no strong opinions on how this is handled.

Command handler. A service takes the action coming in from the client (like ticking off a task) and decides if it is allowed. If it is, it appends the corresponding event to the event log.

Event log. A log of all the events that have occurred. Often this is in a database table. In the diagram above, we show 4 events: “Create Task”, with id “1” and task “Draw Diagram”. Then “Create Task” with id “2” and task “Write post”. Then “Complete Task” with id “1”. (Meaning “Draw Diagram” is complete). Then “Create Task” with id “3” and task “Clean dishes”.

Here the diagram splits into two, because rather than just having the task list, we can process the events and generate other interesting data. These “projections” are different data derived from the same events.

We have a “Your Tasks projection” that is a database table with “id”, “task” and “complete” columns. Based on the events, Task 1 “Draw diagram” is complete, while 2 “Write post” and 3 “Clean dishes” are not. This could be used to render a traditional to-do list UI.

And we have an “Open Tasks Graph projection” with two columns, “date” and “open tasks”. It shows, for any given date, how many tasks were open. We could use this to draw a trend line graph.
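To make the idea of projections concrete, here is a hedged Haxe sketch of the two projections just described (the event names follow the diagram; everything else, including tracking the trend per event rather than per date, is a simplification of mine). Each projection is just a fold over the same event log:

    enum Event {
        CreateTask(id:Int, task:String);
        CompleteTask(id:Int);
    }

    class Projections {
        // "Your Tasks" projection: fold the event log into a current task table.
        static function yourTasks(log:Array<Event>):Map<Int, {task:String, complete:Bool}> {
            var tasks = new Map<Int, {task:String, complete:Bool}>();
            for (e in log) switch e {
                case CreateTask(id, task):
                    tasks.set(id, { task: task, complete: false });
                case CompleteTask(id):
                    var t = tasks.get(id);
                    if (t != null) t.complete = true;
            }
            return tasks;
        }

        // "Open Tasks" projection: how many tasks were open after each event.
        static function openTasksTrend(log:Array<Event>):Array<Int> {
            var open = 0;
            return [for (e in log) switch e {
                case CreateTask(_, _): ++open;
                case CompleteTask(_): --open;
            }];
        }

        static function main() {
            var log = [CreateTask(1, "Draw diagram"), CreateTask(2, "Write post"),
                       CompleteTask(1), CreateTask(3, "Clean dishes")];
            trace(yourTasks(log));      // task 1 complete; tasks 2 and 3 open
            trace(openTasksTrend(log)); // [1, 2, 1, 2]
        }
    }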

Similar to the front end with actions and state, this also opens up some powerful debugging options. We can replay the events and “time travel” to see exactly how our system responded, and look out for points where things may have gone astray.

Bringing it together: the common concepts I want my “Small Universe” framework to draw on

You probably spotted some similar themes running through the above architectures:

  • Keeping code to fetch data and code to process actions separate
  • Having a way to get the “current state” for a page from our data
  • Always rendering the pages based on that current state
  • Allowing the views to trigger actions or “events”
  • These events being tracked, and considered our source of truth
  • Responding to one event at a time to update our application data

Following these allows us to write code which is simpler – each function is focused on either fetching data for the current view, displaying the current view, or processing an action. That makes code easier to understand, easier to test, and simpler to debug.

Making sure our data is updated one action at a time also opens up potential for time travel debugging, which is an incredibly powerful feature for you as a developer when you’re investigating how something went wrong. It also leaves the door open for new features that you can build, being able to take full advantage of all the past user actions.

Finally, by strictly defining the shape of your state, and the set of actions (or “events” or “messages”) that are possible, you can have the framework and compiler do a lot of work for you, ensuring that if a button exists, it has an action, and the action updates the state, and the view reflects the updated state.

So you can find a very productive workflow where you start adding one new line of code for your feature, and the compiler will guide you all the way to completing the feature as valid code, and you can be relatively confident it’ll work.

So, for this “Small Universe” framework I’m starting, I am taking inspiration from these architectures to try to build something that leads you to write code that is easy to write, understand, test and debug. Something that uses events as a source of truth to make it easy to build new features that process previous actions into features or views we hadn’t imagined up front. And something that leads to a happy and productive workflow, with the compiler able to provide ample assistance as you build out new features.

I’ll come back to describe the architecture I’m aiming for in a future post. But I hope this helps you understand the direction from which I’m approaching this project.

Have any questions? Things that weren’t clear? Ideas you want to share? I’d love to hear from you in the comments!


My Haxe Log: Week #2

This week:

  • On my token one day a week, I continued on my Node-Webkit project.  This time I made externs for Kue (externs, which appear to be working) and FFmpeg (externs, not functional just yet).  Still enjoying working with Node-Webkit, and with the Node-API library especially.  Sad I didn’t get to make more progress on it this week.
  •  Ufront:
      • Make tracing / logging work reliably between multiple requests. After enabling neko.Web.cacheModule(), I began to find areas where Ufront was not very multiple-request-friendly. These would have surfaced later with a port to Client JS or Node JS, but it’s good to find them now. One problem was that our tracing and logging modules were behaving as if there was only one request at a time. This could result in a trace message for one request being output to somebody else’s request, which is obviously bad.

        The problem is a tricky one, as trace() always translates to haxe.Log.trace(), and with Ufront’s multiple-requests-at-a-time design, you can’t know which request is the current one from a static method. If I think of a clever way to do it, possibly involving cookies and sessions, then I might include a HttpContext.getCurrentContext() static method. This would probably have to be implemented separately for each supported platform.

        The solution for now, however, was to keep track of log messages not in the TraceModule, but in the HttpContext. Then on the onLogRequest event, the trace modules get access to the log messages for the current context, and can output them to the browser, to a file, or wherever they choose.

        The downside is that you have to use httpContext.ufTrace() rather than trace(). I added a shortcut for this in both ufront.web.Controller and ufront.remoting.RemotingApiClass, so that in your controllers or APIs you can call ufTrace() and it will be associated with the current request. There are also ufLog, ufWarn and ufError.

        I also made RemotingModule work similarly with tracing and logging – so logs go to both the log file and the remoting call to the browser.
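        For illustration, here’s a hedged sketch of the difference in a controller (the method name and overall shape are my assumptions; only ufTrace and its ufLog / ufWarn / ufError siblings come from the changes described above):

            class PostController extends ufront.web.Controller {
                function doDefault() {
                    // trace("...") goes through the static haxe.Log.trace, which can't
                    // know which request is current, so output could leak into another
                    // user's response. This queues the message on this request instead:
                    ufTrace("Rendering the default page");
                }
            }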

      • Fix logging in ErrorModule. One of the things that made debugging the new ufront really hard was that when there was an error, the ErrorModule displayed, but the trace messages did not get sent to the browser or the log file. I did a bit of code cleanup and got this working now.
      • Fixed File Sessions / EasyAuth. Once my traces and logs were working more consistently, I was able to debug and fix the remaining issues in FileSession, so now EasyAuth is working reliably, which is great.
      • Added Login / Logout for UF-Admin. I added a login screen and logout to UF-Admin, which works with EasyAuth for now. I guess I will make it pluggable later… For now though, it means you can set up a simple website, not worry about auth for the front end, but have the back end password protected. If you use EasyAuth for your website / app, the same session will work on the ufadmin page.
      • Created uf-content for app-generated files. I moved all app-generated files (sessions, logs, temp files etc) into a folder called “uf-content”. Then I made this configurable, and relative to httpContext.request.scriptDirectory. You can configure it by changing the contentDirectory option in your UfrontConfiguration. This will make it easier when deploying: we can have instructions to make that single directory writeable but not accessible via the web, and then everything that requires filesystem access can work reliably from there.
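        A hedged example of setting that option (contentDirectory comes from the change above; the other fields follow the UfrontConfiguration example in my Week #1 log, and the exact defaults are assumptions):

            var ufrontApp = new UfrontApplication({
                urlRewrite: true,
                dispatchConf: Dispatch.make( new Routes() ),
                contentDirectory: "../uf-content" // writeable, but not served to the web
            });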
      • Pushed new versions of the libraries. Now that the basics are working, I pushed new versions of the libraries to Haxelib. They are marked as ufront-* with version 1.0.0-beta.1. From here it will be easy to update them individually and move towards a final release.
      • Demo Blog App. To demonstrate the basics of how it works, and a kind of “best practices” for project structure, I created a demo app, and thought I would start with a blog. I started, and the basic setup is there, including the config structure and each of the controller actions, and the “ufadmin” integration. But it’s not working just yet, needs more work.
      • Identified Hair website. I have a website for a friend’s small business that I’ve been procrastinating working on for a long time. On Saturday I finally got started on it, and set up the basic project and routes in Ufront. In about 4 hours I managed to get the project set up, all the controllers / routes working, all the content in place and a basic responsive design with CSS positioning working. All the data is either HTML, Markdown or Database Models (which get inserted into views). Once I’ve got their branding/graphics included, I’ll use ufront to provide a basic way to change data in their database. And then if they’re lucky, I might look at doing some Facebook integration to show their photo galleries on the site.

My Haxe Log: Week #1

Hello! For a reason I can’t comprehend, this page is the most visited page on my blog. If you’re looking for information about logging in Haxe, the “Logging and Tracing” page in the manual is a good start.

If you can be bothered, leave a comment and let me know what you’re looking for or how you came to be here. I’d love to know!

Jason.

Every week as part of my work and as part of my free time I get to work on Haxe code, and a lot of that is contributing to libraries, code, blog posts etc.  Yesterday was one of those frustrating days where I told someone I’d finish a demo ufront app and show them how it works, but I just ran into problem after problem and didn’t get it done, and was feeling pretty crap about it.

After chatting it out I looked back at my week and realised: I have done a lot.  So I thought I should start keeping a log of what I’ve been working on – mostly for my own sake, so I can be encouraged by the progress I have made, even if I haven’t finished stuff yet.  But also in case anything I’m working on sparks interest or discussion – it’s cool to have people know what I’m up to.

So I’d like to start a weekly log.  It may be one of those things I do once and never again, or it may be something I make regular: but there’s no harm in doing it once.

So here we go, my first log.  In this case, it’s not just this week, some of it requires me to go back further to include things I’ve been working on, so it’s a pretty massive list:

  • Node Webkit: On Mondays I work at Vose Seminary, a tertiary college, and I help them get their online / distance education stuff going.  Editing videos, setting up online Learning Management Systems etc.  I have a bunch of command line utils that make the video editing / exporting / transcoding / uploading process easier, but I want to get these into a GUI so other staff can use them.  Originally I was thinking of using OpenFL / StablexUI.  I’m far more comfortable with the JS / Browser API than the Flash API however, and so Node-Webkit looked appealing.  On Monday I made my first Haxe-NodeJS project in over a year, using Clement’s new Node-API repo.  It’s beautiful to work with, and within an hour and a half I had written some externs and had my first “hello-world” Node-Webkit app.  I’ll be working on it again this coming Monday.
  • neko.Web.cacheModule: I discovered a way to get a significant speed-up in your web-apps.  I wrote a blog post about it.
  • Ufront: I’ve done a lot of work on Ufront this week.  After my talk at WWX this year, I had some good chats with people and basically decided I was going to undertake a major refactor of ufront.  I’m almost done!  Things I’ve been working on this week (and the last several weeks, since it all ties in together):
    • Extending haxe.web.Dispatch (which itself required a pull request) so it can be subclassed, allowing you to 1) execute the ‘dispatch’ and ‘executeAction’ steps separately, and 2) return a result, so that you can get the result of your dispatch methods.  This works much more nicely with Ufront’s event-based processing, and allows for better unit testing / module integration etc.  The next step is allowing dispatch to have asynchronous handlers (for Browser JS and Node JS).  I began thinking through how to implement this also.
    • After discovering neko.Web.cacheModule, I realised that it had many implications for Ufront.  Basically: You can use static properties for anything that is generic to the whole application, but you cannot use it for anything specific to a request.  This led to several things breaking – but also the opportunity for a much better (and very optimised) design.
    • IHttpSessionState, FileSession: the first thing that was broken was the FileSession module.  The neko version was implemented entirely using static methods, which led to some pretty broken behaviour once caching between requests was introduced.  In the end I re-worked the interface “IHttpSessionState” to be fairly minimal, and extended it with the “IHttpSessionStateSync” and “IHttpSessionStateAsync” interfaces, so that we can begin to cater for async platforms.  I then wrote a fresh FileSession implementation that uses cookies and flat files, and should work across both PHP and Neko (and in future, Java/C#).  The JS target would need a FileSessionAsync implementation.
    • IAuthHandler / EasyAuth: At the conference I talked about how I had an EasyAuth library that implemented a basic User – Group – Permission model.  At the time, this too was implemented with static methods.  Now I have created a generic interface (IAuthHandler) so that if someone comes up with an auth system other than EasyAuth, it can be compatible.  I also reworked EasyAuth to be able to work with a) different IHttpSessionState implementations and b) different IAuthAdapters – basically, an interface that has a single method: `authenticate()`.  And it tells you if the user is logged in or not.  EasyAuth by default uses EasyAuthDBAuthAdapter, which compares a username and password against those in the database.  You could also implement something that uses OpenID, or a social media logon, or LDAP, or anything.  All this work trying to make it generic enough that different implementations can co-exist will definitely pay off, I think, but for now it helps to have a well thought out API for EasyAuth :)
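      As a hedged sketch (my illustration – the real EasyAuth definitions may well differ, e.g. by returning the matched user rather than a Bool), the adapter boils down to a single-method interface:

          interface IAuthAdapter {
              // Tells you whether the current user is authenticated.
              public function authenticate():Bool;
          }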
    • YesBoss: Sometimes you don’t want to worry about authentication.  Ufront has the ability to create a “tasks.n” command line file, which runs tasks through a command line interface, rather than over the web.  When doing this, you kind of want to assume that if someone has access to run arbitrary shell commands, they’re allowed to do what they want with your app.  So now that I have a generic interface for checking authentication, I created the “YesBossAuthHandler” – a simple class that can be used wherever an authentication system is needed, but where every permission check lets you pass.  You’re the boss, after all.
    • Dependency Injection: A while ago, I was having trouble understanding the need for Dependency Injection.  Ufront has now helped me see the need for it.  In the first app I started making with the “new” ufront, I wanted to write unit tests.  I needed to be able to jump to a piece of code – say, a method on a controller – and test it as if it was in a real request, but using a fake request.  Dependency injection was the answer, and so in that project I started using Minject.  This week, realising I had to stop using statics and singletons in things like sessions and auth handling, I needed a way to get hold of the right objects, and dependency injection was the answer.  I’ve now added it as standard in Ufront.  There is an `appInjector`, which defines things that should be injected everywhere (modules, controllers, APIs etc).  For example, injecting app configuration or a caching module or an analytics API.  Then there is the dispatchInjector, which is used to inject things into controllers, and the remotingInjector, which is used to inject things into APIs during remoting calls.  You can define things you want to make available at your app entry point (or your unit test entry point, or your standalone task runner entry point), and they will be available when you need them.  (As a side note, I now also have some great tools for mocking requests and HttpContexts using Mockatoo).
    • Tracing: Ufront uses Trace Modules.  By default it comes with two: TraceToBrowser and TraceToFile.  Both are useful, but I hadn’t anticipated some problems with the way they were designed.  In the ufront stack, modules exist at the HttpApplication level, not at the HttpRequest level.  On PHP (or uncached neko), there is little difference.  Once you introduce caching, or move to a platform like NodeJS – this becomes a dangerous assumption.  Your traces could end up displaying on somebody else’s request.  In light of this, I have implemented a way of keeping track of trace messages in the HttpContext.  My idea was to then have the Controller and RemotingApiClass have a trace() method, which would use the HttpContext’s queue.  Sadly, an instance `trace()` method currently does not override the global `haxe.Log.trace()`, so unless we can get that fixed (I’m chatting with Simon about it on IRC), it might be better to use a different name, like `uftrace()`.  For now, I’ve also made a way for TraceToBrowser to try guess the current HttpContext, but if multiple requests are executing simultaneously this might break.  I’m still not sure what the best solution is here.
    • Error Handling: I tried to improve the error handling in HttpApplication.  It was quite confusing and sometimes resulted in recursive calls through the error stack.  I also tried to improve the visual appearance of the error page.
    • Configuration / Defaults: The UfrontApplication constructor was getting absurd, with something like 6 optional parameters.  I’ve moved instead to having a `UfrontConfiguration` typedef with all of the parameters; you can supply all, some or none of them, and fall-backs will be used if needed.  This also improves the appearance of the code, from:

      new UfrontApplication( true, "log.txt", Dispatch.make(new Routes()) );

      to:

      new UfrontApplication({
          urlRewrite: true,
          dispatchConf: Dispatch.make( new Routes() ),
          logFile: "log.txt"
      });

    • More ideas: last night I had trouble getting to sleep.  Too many ideas.  I sent myself 6 emails (yes 6) all containing new ideas for Ufront.  I’ll put them on the Ufront Trello Board soon to keep track of them.  The ideas were about Templating (similar abstractions and interfaces I have here, as well as ways of optimising them using caching / macros), an analytics module, a request caching module and setting up EasyAuth to work not only for global permissions (CanAccessAdminArea), but also for item-specific permissions: do you have permission to edit this blog post?
    • NodeJS / ClientJS: after using NodeJS earlier in the week, I had some email conversations with both Clement and Eric about using Ufront on NodeJS.  After this week it’s becoming a lot more obvious how this would work, and I’m getting close.  The main remaining task is to support asynchronous calls in these 3 things: Dispatch action execution calls, HttpRemotingConnection calls, and database calls – bringing some of the DB Macros magic to async connections.  But it’s looking possible now, whereas it looked very difficult only 3 months ago.
  • CompileTime: I added a simple CompileTime.interpolateFile() macro.  It basically reads the contents of the file at macro time, and inserts it directly into the code, but it inserts it using String Interpolation, as if you had used single quotes.  This means you can insert basic variables or function calls, and they will all go in.  It’s like a super-quick and super-basic poor-man’s templating system.  I’m already using it for Ufront’s default error page and default controller page.
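    A hedged usage sketch (the file name and variables here are my own; only the CompileTime.interpolateFile() call comes from the description above):

        // Suppose error.html contains: <h1>Error $code</h1><p>$message</p>
        class Main {
            static function main() {
                var code = 404;
                var message = "Page not found";
                // The file's contents are inlined at compile time and interpolated
                // as if they had been written here in single quotes.
                Sys.println( CompileTime.interpolateFile("error.html") );
            }
        }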
  • Detox: this one wasn’t this week, but a couple of weeks ago.  I am working on refactoring my Detox (DOM / Xml manipulation) library to use Abstracts.  It will make for a much more consistent API, better performance, and some cool things, like auto-casting strings to DOM elements:

        "div.content".find().append( "<h1>My Content</h1>" );
  • My Work Project: Over the last two weeks I’ve updated SMS (my School Management System project, the main app I’ve been working on) to use the new Ufront.  This is the reason I’ve been finding so much that needs to be updated, especially trying to get my app to work with neko.Web.cacheModule.