Projects: Category Archive (Page 2)

Posts about software development projects from or maintained by Bit Badger Solutions

Sunday, November 28, 2021
  A Tour of myPrayerJournal v3: The User Interface

NOTE: This is the second post in a series; see the introduction for information on requirements and links to other posts in the series.

If you are a seasoned Single Page Application (SPA) framework developer, you likely think about interactivity in a particular way. Initially, I focused on replacing each interactive piece in isolation. In the end, though, requests for “pages” returned almost everything but the HTML head info and the displayed footer - and I was happy about it. Keep that in mind as I walk you down the path I have already traveled; keep an open mind, and read to the end before forming strong opinions either way.

The $.05 Tour of Pug and Giraffe View Engine

Understanding the syntax of both Pug and Giraffe View Engine will help you if you click any of the source code example links. While a complete explanation of these two templating languages would make this long post much longer than it already is, here are some short examples of their syntax. Using a string variable who with the contents “World”, we will show both languages rendering:

<p id="example" class="greeting">Hello <strong>World</strong></p>

myPrayerJournal v2 used Pug templates in Vue to render the user interface. Pug uses indentation-sensitive tag/content pairs (or blocks), with JavaScript syntax for attributes, to generate HTML. To generate the example paragraph, the shortest template would look like:

p.greeting(id="example") Hello #[strong= who]

myPrayerJournal v3 uses Giraffe View Engine, which uses F# lists to generate HTML from a very HTML-looking domain-specific language (DSL). The example paragraph would be generated with:

p [ _id "example"; _class "greeting" ] [ str "Hello "; strong [] [ str who ] ]

Given those examples, let's dig into the conversion.

The Menu

The menu across the top of the application was one of the first items I needed to convert. The menu needs to be different, depending on whether there is a user logged on or not. Also, if a user is logged on, the menu can still be different; the “Snoozed” menu item only appears if the user has any snoozed requests. The application uses Auth0 to manage users (which is how it is open to Microsoft and Google accounts), and I wanted to preserve this; my requests are tied to the ID provided by Auth0, so that did not need to change.

In the Vue version, the system used Auth0's SPA library, which exposed whether there was a user logged on or not. Also, once a user was logged on, the API sent all the user's active requests, which included snoozed requests; once this API call returned, the application could turn on the “Snoozed” menu item. In the htmx version, though, this information is all generated on the server. My initial process was to use an hx-get to fetch the menu HTML snippet, with an hx-trigger of load to fill in this spot of the page when the page was loaded. I also (initially) implemented a custom HTTP header to include in responses, and if that header was found, I would trigger a refresh on the menu; the eventual solution included the navbar in “page” refreshes.
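In Giraffe View Engine terms, that initial approach looked roughly like this (the endpoint here is hypothetical, not the application's actual route):

  nav [ _hxGet "/components/navbar"; _hxTrigger "load" ] [ str "Loading..." ]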

(See the Vue “Navigation” component that became the Giraffe View Engine “navbar” function)

“New Page” in htmx

This leads directly into a discussion of how myPrayerJournal is still considered a SPA. In the Vue version, “pages” were Vue Single-File Components (SFCs) under the /components directory. (In the years since myPrayerJournal v1, the default Vue template has changed to place these SFCs under /views, while /components is reserved for shared components.) These view components rendered into a custom component within the main tag (using Vue router's router-view tag), while the nav component was reactive, based on the user logging on/off and snoozing requests.

In myPrayerJournal v3, “page” views target the #top section element. If the request is for a full page load, the HTML head content is rendered, as is the body's footer content; none of these change until a new version of the application is released. If the request is an htmx request, though, the only thing rendered is a new #top section, which includes the navigation bar and the page content. While this does approach a full “page load”, there are some key differences:

  • The page contents are refreshed based on one HTTP request (no extra request or processing required for the navbar);
  • The HTML head content is responsible for most of the large HTTP requests, such as those for JavaScript libraries (and is excluded from non-full-page views);
  • The page footer is not included.

Note the difference between the full view layout and the partial layout. Also, within the application's request handlers, there is a partial return function that determines whether this is an htmx-initiated page view request (in which case a partial view is returned) or a full page request (which returns the entire template).
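As a sketch of that decision (assuming a fullLayout/partialLayout pair of view functions, and an HxRequest extension property in the style of the Giraffe.Htmx examples further down this page):

  let partialIfHtmx (pageTitle : string) content : HttpHandler =
    fun next ctx ->
      let view =
        match ctx.Request.HxRequest with
        | Some true -> partialLayout pageTitle content // the #top section only
        | _         -> fullLayout    pageTitle content // head, footer, and all
      htmlView view next ctx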

Updating the Page Title

One of the most unexpectedly-vexing parts of a SPA is determining how the browser's title bar will be updated when navigation occurs. (I understand why it's challenging; what I do not understand is why it took major frameworks so long to devise a built-in way of handling this.) Coming from that world, I had originally implemented yet another custom header, pushing the title from the server, and used a request listener to update the title if the header was present. As I dug in further, though, I learned that htmx will update the document title any time a response payload has an HTML title element in its head. If you look at both layouts in the preceding paragraph, you'll notice that they include a head element with a title tag. This is how easy it should be, and with htmx, this is how easy it is.
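To make that concrete, even the partial layout carries a head with a title (a sketch; the real layout functions differ in detail, and the title format is assumed for illustration):

  let partialLayout (pageTitle : string) content =
    html [] [
      head [] [ title [] [ str $"{pageTitle} « myPrayerJournal" ] ]
      body [] [ section [ _id "top" ] content ]
    ]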

At this point, there is a pattern emerging. The thought process behind an htmx-powered website is much different than a JavaScript-based SPA framework; and, in the majority of cases, it has been less complex. Now, let me contradict what I just said.

POST-Redirect-GET

In myPrayerJournal v2, updating a prayer request followed this flow:

  • Display the edit page, with the request details filled in
  • When the user saved the request, return an empty 200 OK response
  • Using Vue, display a notification, refresh the journal, then re-render the page where the user had been when they clicked “Edit” (there are multiple places from which requests can be edited)

While there are no redirects here, this is the classic traditional-web-application scenario where the “POST-Redirect-GET” (P-R-G) pattern is used. By using this pattern, the “Back” button on the browser still works. If you try to go back to the result of a POST request, the browser will warn you that your action will result in the data being resubmitted (not usually what you want to do). By redirecting, though, the result of a POST becomes a GET, which does not change any data. For traditional web applications, this is the user-friendliest way to handle updates.

The htmx documentation shows an example of inline editing. This led to my first plan: change the request edit “page” to be a component, where the HTML for the displayed list was replaced by the form, and then the “Save” action returns the new HTML. This requires no P-R-G, as these actions have no effect on the “Back” button. It worked fine, but there were some things that weren't quite right:

  • New requests needed their own page; I was going to have to duplicate the edit form for the “new” page, and introduce some complexity in determining how to render the results.
  • Some updates required refreshing the list of requests, not just replacing the text and action buttons.

At this point, I was also starting to realize “if you think something is hard to do in htmx, you probably aren't trying to do it correctly.” So, I decided to try to replicate the “edit page” flow of v2 in v3. Creating the page was easy enough, and I was able to use the returnTo parameter in the function to both provide a “Cancel” button and redirect the user to the right place after saving. Easy, right? Well… Not quite.

htmx uses XMLHttpRequest (XHR) to send its requests, and XHR has some interesting behavior: it follows redirects! When I submitted my form, the server received the request (with htmx's HX-Request header set) and returned the redirect. XHR saw this and followed it; however, it used the same method, POSTing to the new URL. The fix for this was not easy to find, but easy to implement: use HTTP response code 303 (“See Other”) instead of 307 (“Temporary Redirect”). Using this, combined with using hx-target="#top" on the form, allowed the P-R-G pattern to work successfully without double-POSTing and without a full-page refresh.
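In Giraffe, that fix can be a one-line handler (a sketch; Giraffe's built-in redirectTo emits 301/302 responses, so we set the status code and Location header directly):

  let seeOther (url : string) : HttpHandler =
    setStatusCode 303 >=> setHttpHeader "Location" url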

htmx Support in Giraffe and Giraffe View Engine

As I developed this, I was also building up extensions for Giraffe to handle the htmx request and response headers, as well as the attributes needed to generate htmx-aware markup from Giraffe View Engine. This project, called Giraffe.Htmx, is now published on NuGet. There are two packages; Giraffe.Htmx provides server-side support for the request and response headers, and Giraffe.ViewEngine.Htmx provides support for generating htmx attributes. I wrote about it when it was released, so I won't rehash the entire thing here.

Final UI Thoughts

htmx is much less complex than any other front-end JavaScript SPA framework I have ever used - which, for context, includes Angular, Vue, React, Ember, Aurelia, and Elm. Both in development and in production use, I cannot tell that the payloads are slightly larger; navigation is fast and smooth. Though I have yet to change anything since going live with myPrayerJournal v3, I know that maintenance will be quite straightforward (to be further explored in the conclusion post).

The UI for myPrayerJournal uses Bootstrap, a UI framework which has its own script, and htmx plays quite nicely with it. The next post in this series will describe how I interact with both Bootstrap and htmx, using modals and toasts on this “traditional” web application.


Friday, November 26, 2021
  A Tour of myPrayerJournal v3: Introduction

This is the first of 5 posts in this series.

Background

Around 3 years ago, I wrote an 8-part series called “A Tour of myPrayerJournal”, recounting the decisions and implementation of its initial release. Version 2 did not get its own tour, as it used a similar architecture. There were also some nagging library issues that were never resolved, leading to v2 being an overall unsatisfying step in the evolution of this application.

When Vue v3 was announced, it sounded like a great opportunity, with first-class TypeScript support and a new component syntax that promised better performance and a better developer experience. This past summer, I completed a project with the mature Vue v3 framework, and was generally pleased with the results. Just after I returned to my previously abandoned migration attempt on this project (begun with early Vue v3 support), I heard about htmx. With a few attributes, and a server that can handle a few HTTP headers, you can build an interactive site with performance rivaling or exceeding that of the typical Single Page Application (SPA) - or so they claimed, at that point.

I also picked up LiteDB on another project over the summer, and it worked well. I thought, why not give these technologies a try, and see if I would like the result?

(SPOILER ALERT: I did!)

The Requirements

Requirements for v3 were, for the most part, to update the application to Vue v3. Without rehashing the entire list (see the other intro post), the basic idea is that a prayer request is represented by a card, and this card keeps up with all changes made to it. Also, the system can present the cards that are active, arranged with the oldest action date first, and allow you to tick through the cards. (This is the flow that enables the user to “pray through their list.”)

The goal is to remain a minimalist program; the focus should be on prayer, not using a website. To that end, I had envisioned a “one-at-a-time” scenario that would clear out distractions and present the cards in the same order. I had also planned to separate the “last prayed” date from the “last activity” date; currently, updating the text of a request moves it to the bottom of the stack. However, both of these improvements were deferred to v3.1; v3 restores the (adequate) functionality of v1, while being much lighter-weight.

The Tech Stack

This stack did not go through nearly as many iterations as v1.

Giraffe is a library that enables F# developers to create ASP.NET Core endpoints in a functional style. It's a mature library (v1 used Giraffe!), and continues to be improved. It also provides an optional “Giraffe View Engine,” which will get more attention in the user interface post; the views for v3 are produced via this view engine.

htmx is a JavaScript library that asks… well, several questions. Why should links and buttons be the only interactive elements? Why should you have to replace the whole page every time? What would HTML look like if it had been developed the way a typical programming language would be? It uses a small set of attributes to answer these questions differently, making interactive sites possible without writing any JavaScript. (The custom JavaScript file in v3 is 82 lines, including comments - and the majority of that is Bootstrap interaction.)

Since, in the htmx way, the web server returns rendered HTML, the responses can be a bit larger than the equivalent API calls that return JSON for a SPA framework to render. However, this is offset somewhat by the fact that the browser just has to swap that HTML fragment in; the processing is faster and much less complex.

What really swung me over the fence to giving it a shot, though, was a point Carson (the author of the library) made while talking with Carl and Richard on the .NET Rocks! podcast. Having a server render the HTML, and the browser merely displaying it, keeps your application logic on the server; the only JavaScript you need to write is what is required for the user interface. This eliminates a host of synchronization issues with SPAs and their associated APIs - duplicating shapes of data, ensuring calculations are in sync, etc. It also keeps your application logic from needing to be exposed to the public Internet; this doesn't entirely prevent exploits, but the prospective hacker doesn't start with a full copy of your code.

LiteDB could be described as SQLite for documents. Collections of Plain-Old CLR Objects (POCOs) can be stored, retrieved, searched, indexed, and deleted, all while running in the current process, and requiring no separate database server install. While it does not require any special configuration to do this, it does also provide the ability to transform these objects. This gives complete control as to how much or how little transformation you want to specify; and, as we'll see in part 3, this came in handy for this application.
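As a minimal illustration of that pattern (the type, file name, and values here are made up for the example, not taken from myPrayerJournal):

  open LiteDB

  [<CLIMutable>]
  type Request = { Id : string; UserId : string }

  let findMine () =
    // The database is a single local file; no server process is involved
    use db = new LiteDatabase "local.db"
    let requests = db.GetCollection<Request> "request"
    requests.Insert { Id = "abc123"; UserId = "user-one" } |> ignore
    // Query with a plain predicate; LiteDB translates it for us
    requests.Find (fun r -> r.UserId = "user-one") |> List.ofSeq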

Where We Go from Here

In the next post, we'll take a look at Giraffe, its View Engine, htmx, and how they all work together. The post after that will dive into the aforementioned 82 lines of JavaScript to see how we can control Bootstrap's client/browser behavior from the server. After that, we'll dig in on LiteDB, to include how we serialize some common F# constructs. Finally, we'll wrap up the series with overarching lessons learned, and other thoughts which may not fit nicely into one of the other posts.


Tuesday, November 9, 2021
  Introducing Giraffe.Htmx

Giraffe is a library that sits atop ASP.NET Core and allows developers to build web applications in a functional style; dotnet new giraffe is literally my starting point when I begin a new web application project. (Rather than write three more sentences filled with effusive praise, I'll just leave it at that; it's great.) It also provides a view engine (that builds upon Suave's “experimental” view engine) which uses an F# DSL to define HTML in a strongly-typed way. It has been incredibly efficient for a while, but with .NET's work over the past two releases at improving performance, and Giraffe's adoption of those techniques, it is lightning fast.

htmx is a library that brings interactivity to HTML through the use of attributes and HTTP headers. Whereas projects like Vue, Angular, and React prescribe completely different programming paradigms than traditional web development, htmx provides partial-page-swapping and progressive enhancement within straight HTML. This brings a lot of the benefits of the SPA architecture to vanilla HTML, without requiring a completely different paradigm than the one we have used on the web for 30 years. In practice, this greatly reduces the complexity required to produce an interactive web application.

The Giraffe.Htmx project provides a bridge between these two libraries. The project consists of two NuGet packages.

  • Giraffe.Htmx provides extensions to Giraffe (and its exposure of ASP.NET Core's HttpContext) that expose the request headers which htmx uses, and provides Giraffe-style HttpHandlers to set htmx's recognized response headers. The request headers are exposed as Options, and if present, are converted to the expected type. Response headers can be set in a similar way (i.e., passing true instead of "true" for a boolean header); a hand-rolled equivalent of one such handler appears after this list.
  let myHandler : HttpHandler =
    fun next ctx ->
      match ctx.Request.HxPrompt with
      | Some prompt -> ... // do something with the text the user provided
      | None -> ... // the user provided no text (likely was not even prompted)
      ...
  • Giraffe.ViewEngine.Htmx extends Giraffe's view engine with attribute functions (e.g., _hxBoost equates to hx-boost="true") to generate HTML with htmx attributes. As with the headers, the values for each attribute are expected in their strongly-typed form, and the library handles the necessary string conversion. For attributes that have a defined set of values, there are also modules that provide those values; the example below demonstrates both the attributes and the HxTrigger module.
  let autoLoad =
    div [ _hxGet "/this/endpoint"; _hxTrigger HxTrigger.Load ] [ str "Loading..." ]  
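As for the response-header side mentioned in the first bullet, here is a hand-rolled equivalent of what such a handler does, using plain Giraffe (the library packages these up as ready-made HttpHandlers):

  // Ask htmx to refresh the page once it processes this response
  let withRefresh : HttpHandler = setHttpHeader "HX-Refresh" "true"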

Head over to the project site for NuGet links and more examples!

p.s. As of this writing, the current (and only) version of this library is at v0.9.1. Both libraries should be ready for development use. For Giraffe.ViewEngine.Htmx, I intend to write helpers for hx-headers and hx-vals that will allow a list of string * string tuples to be passed. I also need to write READMEs for both NuGet packages. Once those are done, this will be v1-ready.


Sunday, September 2, 2018
  A Tour of myPrayerJournal: Conclusion

NOTE: This is the final post of an 8-post series; see the introduction for all of them, and the requirements for which this software was built.

Over the course of this tour, we've meandered through client side code, server side code, a database, and documentation. However, the experience of developing an application is more than just the sum of the assembled technologies and techniques. Let's take a look at some of these “lessons learned” and opinions formed through this process. (This post will use the first-person singular pronouns “I” / “me” / “my” a lot more than the previous ones.)

Vue Is Awesome – But…

As I tried different SPA frameworks, they were interesting and fun, but a lot more work than I expected. Then, I came across Vue, and its paradigm and flow just clicked. Single file components are great; it was so nice to not have to go digging through a site-wide CSS file looking for styles that affected the elements in the component. I just had to scroll down! While I did put the common CSS in App.vue, the application's top component, anything unique that component did was right there. There are also all kinds of Vue-aware packages that you can add and use, that add their own elements/components to the process; Element UI, Bootstrap Vue, and Vue-Awesome were three of the ones I used at different points in development. Since it's a JavaScript framework, you can use vanilla JS packages as well; myPrayerJournal uses moment.js to display relative dates (“last activity 8 minutes ago”) and format dates for display.

Then… I ran the build process, and the bundles were huge – as in, multiple megabytes huge! We changed our implementation of Vue-Awesome to only import the icons we used in the application (to be fair to them, that is the project's recommendation). Element also seemed to be rather heavy. One of the last issues I worked before final release was removing Bootstrap Vue and just using straight HTML/CSS for layout and flow (which is another lesson we'll explore below). There are guides on configuring Webpack to help make moment's bundle smaller (and that project has an open issue to refactor so that it's more friendly to a “just import the part you need” paradigm).

None of this is meant as a knock of any of the fine projects I've named up to this point. It was also near the end of the project when I converted to the Vue CLI v3 template, which uses a version of Webpack that has a much better “tree-shaking” algorithm. (This means that, if it can establish that code is never executed, it excludes it from the bundle.) myPrayerJournal v1.0's modern “vendor” bundle (the one with the libraries) is 283K, while the legacy bundle is 307K; while that's not lightning fast on mobile, it's also comparable to a lot of other sites, and since page updates happen through the API, it is fast once it's loaded.1

Lesson: Be smart about what you import.

Lesson: The JavaScript ecosystem evolves quickly. This was published September 2nd, 2018, describing development that occurred September 2016 through August 2018; a good bit of this is likely already obsolete. :)

You Might Not Need…

We mentioned above that the site eventually was written with simple HTML and CSS. Many of the more popular packages and utilities were created to make up for deficiencies, either in the browser ecosystem or among the differing browser vendors. With the recent efforts by browser vendors to support published standards, though, many of these packages are used for reasons that distill to comfort and inertia. As before, this is not a knock on these projects; they filled a definite need, and continue to work as the basis for a lot of deployed, executing code.

For new development, though, existing standards – and their support – may be sufficient. There are some great sites that detail how certain things can be done using plain JavaScript or CSS.

I used the last one quite a bit. I also extensively referred to CSS Tricks' “A Complete Guide to Flexbox” post. When I decided to rework the layout without Bootstrap, I thought the replacement would be CSS Grid; however, Flexbox was more than enough.

Lesson: Use a framework if you want, but don't assume it's the only way to do things.

Lesson: If you want to shrink your bundle size, 20-30 lines of your own code can sometimes save you 20-30K (or more).

Learn Go

Ladies and gentlemen of the Internet class of 2018 -
  Learn Go.
If I could offer you only one tip for the future,
   Go would be it.
with apologies to Baz Luhrmann

Go is a systems programming language. It was developed at Google, to help them better utilize their hardware. It natively supports concurrent processing (which can be done in parallel, but is distinct from “parallel programming”); has an opinionated code formatter; forces you to address calls that may error; and is terribly efficient. When myPrayerJournal was running with the Go backend, the working size in RAM was around 10MB. Let me say that again, this time with feeling - the working size for a database-accessing, HTTP-listening, dynamic web service was 10MB of RAM! If you have ever profiled a web server process, you know that it's nearly ludicrous how small this is. For comparison, the process working set for the F#/Giraffe/EF Core version of the backend runs between 60-80MB, and includes another ~256MB of shared working set memory.2 (An Apache2 process running PHP can run in the 256MB range as well.)

Why am I recommending a technology that I ultimately moved away from before the v1.0 release? Well, other than "did you read the last paragraph?!?!", the short answer is “it's the future, and will change how you code in every other language.” The fact that it forces you to deal with every single thing that may error makes it robust; but, if you learn to develop with it, you will find yourself thinking about error handling more fully than you did before – and I say this as a person who already coded error handlers as I coded the happy path.

Why did I move away from a technology on which I'm so bullish? Well, for starters, F# is the language that “clicks” for me in the same way that Vue did as a client-side framework; its development paradigm just makes sense with the way I think about structuring code. I completed a project that used F# and Giraffe (which I hope to take open-source soon), and that gave me the confidence to move forward with a third attempt at an F# API. (Third time's the charm, right?) Also, while Entity Framework generated some pretty verbose SQL statements, EF Core more or less generates what I would have written anyway, plus it takes care of populating the objects it returns from the database.

I also found the development process awkward, though not unwieldy. (They're probably not going to adopt that as their slogan…) Much has been written about the GOPATH environment variable, but once you get into it, it starts to make sense. go get is great at pulling down your dependencies, and the way it does it, you can peruse the source code to see exactly what they are doing. Also, it was not too difficult to develop on Windows, but build executables for Linux – which, in the “systems” programming world, is a really cool feature.

Lesson: Learn Go.

Write about Your Code

This wasn't a lesson I learned on this project; this was one I'd known for a while. There are (at least) two distinct benefits to writing about your code:

  • It can help you learn even more than you may have learned when you were writing the code itself. As developers, we sometimes forget to step back and look at the body of work we've created. If you write about it, you have to form a coherent view about it so you can explain it to others; this helps you long-term. Also, people who know more about the environment can chime in on what you've written, which also not only helps you learn, ...
  • It helps build community. If you hit a snag, and someone in the community around that technology helps you get past it, writing about your experience means that the next person to encounter that issue may not have to ask; if their searching leads them to your post, they can fix it and move on. This applies doubly if you could not get help from the community; you might be the one who surfaces this issue/technique, and moves the entire community forward.

Lesson: Write about your code; participate in the community around your technology to whatever extent you are able.


If you've been with us for this entire tour – thank you. I hope you've learned something; I know I have, not just through the development of myPrayerJournal, but through the course of writing about it. And, certainly, if you feel that this application could help you in any way, help yourself. It is and will always be free, and Bit Badger Solutions (and DJS Consulting before it) has, as of this writing, a 14-year streak of no known data breaches; your prayer requests are safe with us.


1 There are chunk-splitting techniques that can be used to make the initial download smaller, and load other portions on-demand. Moment.js, for example, isn't needed for the default “Welcome to myPrayerJournal” page. We could defer loading that until the user has logged in, as the journal page definitely needs it; we'd still have to download the same amount, but it would be spread out across 2 requests. Opportunities to save some size in the initial download are still out there, but 283K is just above the 244K suggested bundle size, so we went forward with it.

2 The server on which I host myPrayerJournal already has other .NET Core processes running on it, so the shared memory size has already been allocated.


Saturday, September 1, 2018
  A Tour of myPrayerJournal: Documentation

NOTES:

  • This is post 7 in a series; see the introduction for all of them, and the requirements for which this software was built.
  • Links that start with the text “mpj:” are links to the 1.0.0 tag (1.0 release) of myPrayerJournal, unless otherwise noted.

We have spent a lot of time looking at various aspects of the application, written from the perspective of a software developer. However, there is another perspective to consider - that of a user. With a “minimalist” app, how do we let them know what the buttons do?

Setting Up GitHub Pages

In other projects, we had used GitHub Pages by creating a gh-pages branch of our repository. This has some advantages, such as a page structure that is completely separate from the source files. At the same time, though, you either have to have 2 sandboxes on different branches, or switch branches when switching from coding to documentation. Fortunately, this is not the only option; GitHub Pages can publish from a /docs folder on the master branch as well. They have a nice guide on how to set it up.

The _config.yml file that's also in the /docs folder (mpj:docs folder) was put there by GitHub when we selected the theme from the Settings page of the repo. The only file we ever put there was index.md (mpj:docs/index.md).

Writing the Documentation

As GitHub Pages uses Jekyll behind the scenes, we have Markdown and the default Liquid tags available to us. We didn't need any Liquid tags, though, as the documentation is pretty straightforward. You may notice that there isn't any front matter at the top of index.md; absent that, GitHub Pages selects the title of the page from the first h1 tag in the document, defined with a leading # sequence.

If you browse the commit history for index.md, you'll see that many of the commits reference either a version or issue number. This makes it rather simple to go back in time through the source code, and not only review the code, but see how its functionality was explained in the documentation. You can also review typos that got committed, which helps keep you humble. :)

Making It Better

There is more that could be done to improve this aspect of the project. The first would be to assign it a subdomain of prayerjournal.me instead of serving the pages from bit-badger.github.io. GitHub Pages does support custom subdomains, and even supports HTTPS for these through Let's Encrypt. The second would be to work up a lightweight theme for the page that matched the look-and-feel of the main site; this would make a more unified experience. Finally, while “minimalist” was the goal, there ended up being a few features that took lots of words to explain; a table of contents on the page would help people navigate directly to the help they need.


Our tour is drawing to a close; next time, we'll wrap it up with observations and opinions over the development process.


Friday, August 31, 2018
  A Tour of myPrayerJournal: The Data Store

NOTES:

  • This is post 6 in a series; see the introduction for all of them, and the requirements for which this software was built.
  • Links that start with the text “mpj:” are links to the 1.0.0 tag (1.0 release) of myPrayerJournal, unless otherwise noted.

Up to this point in our tour, we've talked about data a good bit, but it has all been in the context of whatever else we were discussing. Let's dig into the data structure a bit, to see how our information is persisted and retrieved.

Conceptual Design

The initial thought was to create a document store with one document type, the request. The request would have an ID, the ID of the user who created it, and an array of updates/records. Through the initial phases of development, our preferred document database (RethinkDB) was going through a tough period, with their company shutting down; thankfully, they're now part of the Linux Foundation, so they're still around. RethinkDB supports calculated fields in documents, so the plan was to have a few of those to keep us from having to retrieve or search through the array of updates.

We also considered a similar design using PostgreSQL's native JSON support. While it does not natively support calculated fields, a creative set of indexes could also suffice. As we thought it through a little more, though, this seemed to be over-engineering; this isn't unstructured data, and PostgreSQL handles max-length character fields very well. (This is supposed to be a “minimalist” application, right?) A relational structure would fit our needs quite nicely.

The starting design, then, used 2 tables. request had an ID and a user ID; history had the request ID, an “as of” date, a status (created, updated, etc.), and the optional text associated with that update. Early in development, the journal view brought together the request/user IDs along with the latest history entry that affected the text of the request, as well as the last date/time an action had occurred on the request. When the notes capability was added, it got its own note table; its structure was similar to the history table, but with non-optional text and without a status. As snoozing and recurrence capabilities were added, those fields were added to the request table (and the journal view).

The final design uses 3 tables, 2 of which have a one-to-many relationship with the third; and 1 view, which provides the calculated fields we had originally planned for RethinkDB to calculate.

Database Changes (Migrations)

As we used 3 different server environments over the course of this project, we ended up writing a DbContext class based on our existing structure. For the Node.js backend, we created a DDL file (mpj:ddl.js, v0.8.4+) that checked for the existence of each table and view, and also had the SQL to execute if the check failed. For the Go version (mpj:data.go, v0.9.6+), the EnsureDB function does a similar thing; looking at line 347, it is checking for a specific column in the request table, and running the ALTER TABLE statement to add it if it isn't there.

The only change that was required since the F#/Giraffe backend has been in place was the one to support request recurrence. Since we did not end up with a scaffolded EF Core initial migration/model, we simply wrote a SQL script to accomplish these changes (mpj:sql directory).1

The EF Core Model

EF Core uses the familiar DbContext class from prior versions of Entity Framework. myPrayerJournal does take advantage of a feature that just arrived in EF Core 2.1, though - the DbQuery type. DbSets are collections of entities that generally map to an underlying database table. They can be mapped to views, but unless it's an updateable view, updating those entities results in a runtime error; plus, since they can't be updated, there's no need for the change tracking mechanism to care about the entities returned. DbQuery addresses both these concerns, providing lightweight read-only access to data from views.

The DbContext class is defined in Data.fs (mpj:Data.fs), starting in line 189. It's relatively straightforward, though if you have only ever seen a C# model, it's a bit different. The combination of val mutable x : [type] and the [<DefaultValue>] attribute are the F# equivalent of C#'s [type] x; declaration, which creates a variable and initializes reference types to null. The EF Core runtime provides these instances to their setters (lines 203, 206, 209, and 212), and the application code uses them via the getters (a line earlier, each).
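In miniature, the pattern looks like this (a fragment from within such a DbContext class, with an assumed entity name):

  [<DefaultValue>]
  val mutable private requests : DbSet<Request>
  member this.Requests
    with get () = this.requests
     and set v  = this.requests <- v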

The OnModelCreating overridden method (line 214) is called when the runtime first creates its instance of the data model. Within this method, we call the .configureEF function of each of our database types. The name of this function isn't prescribed, and we could define the entire model without even referencing the data types of our entities; however, this technique gives us a “configure where it's defined” paradigm with each entity type. While the EF “Code First” model creates tables that don't need a lot of configuring, we must provide more information about the layout of the database tables since we're writing a DbContext to target an existing database.

Let's start out by taking a look at History.configureEF (line 50). Line 53 says that this entity maps to the table history. This seems to be a no-brainer, but EF Core would (by convention) be expecting a History table; since PostgreSQL uses a different syntax for case-sensitive names, these queries would look like SELECT ... FROM "History" ..., resulting in a nice “relation does not exist” error. Line 54 defines our compound key (requestId and asOf). Lines 55-57 define certain properties of the entity as required; if we try to store an entity where these fields are not set, the runtime will raise an exception before even trying to take it to the database. (F#'s non-nullability makes this a non-issue, but it still needs to be defined to match the database.) Line 58 may seem to do nothing, but what it does is make the text property immediately visible to the model builder; then, we can define an OptionConverter<string>2 for it, which will translate between null and string option (None = null, Some [x] = [x]). (Lines 60-61 are left over from when I was trying to figure out why line 62 was raising an exception, leading to the addition of line 58; they could safely be removed, and will be for a post-1.0 release.)
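Condensed, the configuration just described reads along these lines (a sketch, not the file verbatim):

  static member configureEF (mb : ModelBuilder) =
    mb.Entity<History> (fun m ->
      m.ToTable "history" |> ignore
      m.HasKey ("requestId", "asOf") |> ignore
      m.Property(fun e -> e.requestId).IsRequired () |> ignore
      m.Property(fun e -> e.status).IsRequired () |> ignore
      // Translate between null in the database and string option in F#
      m.Property(fun e -> e.text).HasConversion (OptionConverter<string> ()) |> ignore)
    |> ignore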

History is the most complex configuration, but let's take a peek at Request.configureEF (line 126) to see one more interesting technique. Lines 107-110 define the history and notes collections on the Request type; lines 138-145 define the one-to-many relationship (without a foreign key entity in the child types). Note the casts to IEnumerable<x> (lines 138 and 142) and obj (lines 140 and 144); while F# is good about inferring types in a lot of cases, these functions are two places it is not. We can use the :> operator for the cast, because these types are part of the inheritance chain. (The :?> operator is used for potentially unsafe casts.)

Finally, the attributes above each record type need a bit of explanation; each one has [<CLIMutable; NoComparison; NoEquality>]. The CLIMutable attribute creates a no-argument constructor for the record type, which the runtime can use to create instances of the type. (The side effect is that we may get null instances of what is expected to be a non-null type, but we'll look at dealing with that a bit later.) The NoComparison and NoEquality attributes keep F# from creating field-level equality and comparison methods on the types. While these are normally helpful, there is an edge case where they can raise NullReferenceExceptions, especially when used on null instances. As these record types are simply our data transfer objects (both from SQL and to JSON), we don't need the functionality anyway.
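Putting those pieces together, an entity declaration looks roughly like this (field types are illustrative; dates appear as int64 values elsewhere in this discussion, so that is what we use here):

  [<CLIMutable; NoComparison; NoEquality>]
  type History =
    { requestId : string
      asOf      : int64
      status    : string
      text      : string option }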

Reading and Writing Data

EF Core uses the “unit of work” pattern with its DbContext class. Each instance maintains knowledge of the entities it's loaded, and does change tracking against those entities, so it knows what commands to issue when .SaveChanges() (or .SaveChangesAsync()) is called. It doesn't do this for free, though, and while EF Core does this much more efficiently than Entity Framework proper, F# record types do not support mutation; if req is a Request instance, for example, { req with showAfter = 123456789L } returns a new Request instance.

This is the problem whose solution is enabled by lines 227-233 in Data.fs. We can manually register an instance of an entity as either added or modified, and when we call .SaveChanges(), the runtime will generate the SQL to update the data store accordingly. This also allows us to use .AsNoTracking() in our queries (lines 250, 258, 265, and 275), which means that the resultant entities will not be registered with the change tracker, saving that overhead. Notice that we don't specify that on line 243; since Journal is defined as a DbQuery instead of a DbSet, we get change-tracking-avoidance for free.
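Those manual registrations boil down to members along these lines (member names assumed):

  member this.AddEntry    (e : obj) = this.Entry(e).State <- EntityState.Added
  member this.UpdateEntry (e : obj) = this.Entry(e).State <- EntityState.Modified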

Generally speaking, the preferred method of writing queries against a DbContext instance is to define extension methods against it. These are static by default, and they enable the context to be as lightweight as possible, while extending it when necessary. However, since this context is so small, we've created 6 methods on the context that we use to obtain data.

If you've been reading along with the tour, we have already seen a few API handler functions (mpj:Handlers.fs) that use the data context. Line 137 has the handler for /api/journal, the endpoint to retrieve a user's active requests. It uses .JournalByUserId(), defined in Data.fs line 242, whose signature is string -> JournalRequest seq. (The latter is an F# alias for IEnumerable<JournalRequest>.) Back in the handler, we use db ctx to get the context (more on that below), then call the method; we're piping the output of userId ctx into it, so it gets its lone parameter from the pipe, then its output is piped to the asJson function we discussed as part of the API.
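Transcribed from that description, the handler reads roughly like this (assuming asJson : 'T -> HttpHandler; the actual signature may differ):

  let journal : HttpHandler =
    fun next ctx ->
      // Get the data context, query by the current user's ID, return JSON
      let reqs = (db ctx).JournalByUserId (userId ctx)
      asJson reqs next ctx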

Line 192, the handler for /api/request/[id]/history, demonstrates both inserting and updating data. We attempt to retrieve the request by its ID and the user ID; if that fails, we return a 404. If it succeeds, though, we add a history entry (lines 201-207), and optionally update the showAfter field of the request based on its recurrence. Finally, the call on line 212 commits the changes for this particular instance. Since the .SaveChanges[Async]() methods return the number of records affected, we cannot use the do! operator for this; F# makes you explicitly ignore values you aren't either returning or assigning to a name. However, defining _ as a parameter or name demonstrates that we realize there is a value to be had, we just are not going to do anything with it.

We mentioned that CLIMutable record types could be null. Since record types cannot normally be null, we cannot code something like match [var] with null -> ...; it's a compiler syntax error. What we can do, though, is use the box operator. box “boxes” whatever value we have into an object container, where we can then check it against null. The function toOption in Data.fs on line 11 does this work for us; throughout the retrieval methods, we use it to return options for items that are either present or absent. This is why we could do the match statement in the /api/request/[id]/history handler against Some and None values.
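Reconstructed from that description, toOption is essentially:

  /// Box a possibly-null record instance, null-check it, and return an option
  let toOption<'T> (item : 'T) =
    match box item with
    | null -> None
    | _    -> Some item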

Getting a DbContext

Since Giraffe sits atop ASP.NET Core, we use the same technique; we use the .AddDbContext() extension method on the IServiceCollection interface, and assign it when we set up the dependency injection container. In our case, it's in Program.fs (mpj:Program.fs) line 50, where we also direct it to use a PostgreSQL connection defined by the connection string “mpj”. (This comes from the unified configuration built from appsettings.json and appsettings.[Environment].json.) If we look back at Handlers.fs, lines 45-47, we see the definition of the db ctx call we used earlier. We're using the Giraffe-provided GetService<'T>() extension method to return this instance.
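In sketch form, the two ends of that wiring look like this (the context type name is assumed):

  // Program.fs: register the context with the DI container
  services.AddDbContext<AppDbContext>(fun options ->
    options.UseNpgsql (config.GetConnectionString "mpj") |> ignore)
  |> ignore

  // Handlers.fs: resolve it from a handler's HttpContext
  let db (ctx : HttpContext) = ctx.GetService<AppDbContext> ()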


Our tour is nearing its end, but we still have a few stops to go. Next time, we'll look at how we generated documentation to tell people how to use this app.


1 Writing this post has shown me that I need to either create a SQL creation script for the repo, or create an EF Core initial migration/model, in case the database ever has to be recreated from scratch. It's good to write about things after you do them!

2 This is also a package I wrote; it's available on NuGet, and I also wrote a post about what it does.
