Findka marketing, experimenting with Materialize.io, Biff launch prep

Findka

So far this week we’re at 16 active users (up from 14 last week). I tried posting Findka on Hacker News (and reposting with mod permission), but it was a dud both times. I have to say, it’s much easier to get views for the Clojure articles I write. I haven’t identified a niche for Findka specifically that I can market to effectively, so I’ll probably just continue writing Clojure articles and hosting them on findka.com. That seems to work well enough.

I also moved the Biff docs to findka.com/biff and added a “Sponsored by Findka” note in the sidebar, which I guess is true depending on your definition of “sponsored”, ha ha. All of the money I’ve been paid to work on Biff has come from Findka; vacuously true, since that amount is currently zero (though I have pre-emptively signed up for GitHub Sponsors).

I’ve been thinking a lot about various ways to promote Findka, but I’ve decided to just focus on writing articles for now because (1) it works, and (2) I enjoy doing it. I hadn’t been paying much attention to the latter point, and I think that was a mistake. My mental model was basically that you have fun building the thing first, and then you do un-fun things to get people to use it. But there’s power in structuring your work around your strengths and interests; I just hadn’t made that connection to marketing until now.

Biff

I have some exciting news. Last week I experimented with Materialize.io, and I’ve confirmed that it can be integrated with Biff without too much trouble. This means you’ll be able to subscribe to arbitrary SQL queries (Biff’s current subscribable queries, like Firebase’s, don’t allow joins). It’ll work like this:

  1. You specify which Crux documents/Biff tables you want continuously imported into Materialize (which, from the outside, is basically Postgres).

  2. Using those imported tables as sources, you write SQL queries, the output of which will be kept up-to-date efficiently by Materialize—and the queries can be as complex as you like.

  3. The query outputs will be exported back to Biff as Clojure data structures in their own Biff tables, which you can then subscribe to with Biff’s existing subscription mechanism (see the sketch after this list).
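
Here’s a rough sketch of what steps 1 and 2 might look like from the application’s side. Everything in it is hypothetical: the :biff.materialize/* keys, the table names, and the view definition are placeholders I made up, since the integration’s API isn’t settled yet.

    ;; Hypothetical Biff system config; none of these keys exist yet.
    {;; Step 1: Crux document types to stream into Materialize.
     :biff.materialize/tables #{:users :events}

     ;; Step 2: SQL over the imported tables. Materialize keeps each
     ;; view's result incrementally up to date, and joins are allowed.
     :biff.materialize/views
     ["CREATE MATERIALIZED VIEW active_users AS
       SELECT u.id AS user_id, count(*) AS n_events
       FROM users u JOIN events e ON e.user_id = u.id
       GROUP BY u.id"]}

Step 3 would then be an ordinary Biff subscription against active_users, no different from subscribing to any other table.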

As an example of what you could do with this, check out the stats section in Findka’s sidebar (it won’t be visible if you’re on mobile). Right now I recompute those stats every 5 minutes from scratch. With Materialize, those stats could be incrementally updated whenever new data comes in. So all I’d have to do is write a few SQL queries, and then the results will stay up-to-date even at scale.
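
To make that concrete, the recompute-every-5-minutes job could be replaced with a view definition along these lines. The table and column names are made up; Findka’s actual schema differs.

    ;; Hypothetical stats view. Materialize updates the counts
    ;; incrementally as rows arrive instead of rescanning the table.
    (def stats-view
      "CREATE MATERIALIZED VIEW stats AS
       SELECT count(*)                AS n_items,
              count(DISTINCT user_id) AS n_users
       FROM   items")

Reading the current values is then just SELECT * FROM stats, which Materialize answers from the maintained result rather than recomputing anything.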

Now, at Findka’s current scale, there’s no need for Materialize. I could recompute the stats from scratch after every document write and it would still be plenty fast. On top of that, it’ll take more work before the Materialize integration is actually useful at scale: for one thing, Biff itself currently only runs on a single machine, and I’ll be using Materialize’s somewhat-janky CSV import feature instead of going through Kafka.
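
For the curious, here’s roughly what driving that CSV import could look like. Materialize speaks the Postgres wire protocol, so a stock JDBC setup works; the file path and column count below are placeholders, and the CREATE SOURCE syntax is my reading of Materialize’s docs as of mid-2020, so treat it as approximate.

    (require '[next.jdbc :as jdbc])

    ;; Materialize listens on a Postgres-compatible port (6875 by
    ;; default), so the ordinary Postgres driver can talk to it.
    (def mz (jdbc/get-datasource
              {:dbtype "postgresql" :host "localhost"
               :port 6875 :dbname "materialize"}))

    ;; Tail a CSV file that Biff appends Crux documents to.
    ;; Hypothetical path and column count.
    (jdbc/execute! mz
      ["CREATE SOURCE docs
        FROM FILE '/var/biff/docs.csv'
        WITH (tail = true)
        FORMAT CSV WITH 3 COLUMNS"])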

But I’m still excited about integrating with Materialize at this stage because the integration has a clear path to scalability. People can start building applications with it now, and as I continue to work on Biff, their queries will be able to scale without anyone restructuring their applications.

Plus, Materialize is just plain cool.

Other than that, I’ve been going through some of the Biff issues in preparation for release. Mainly I’ve reorganized a lot of the namespaces and moved some of the code to Trident.

Published 9 Jun 2020

