AmitU's Journal - May 2023

wasm code generation from Rust

20th May 2023

So we are trying to compile ftd to wasm. How do we do this? Some things are simple: functions and components are going to get compiled/transpiled into wasm, and we generate WAT or WAST files.

Records, Strings, etc. are going to get compiled away into mallocs and pointer arithmetic, and we have to export or implement some functions like array_new(), string_new() etc., all the helpers we need.
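
One way to picture these helpers is as host functions registered with the wasmtime linker (though they could just as well be implemented inside the generated wasm itself). A minimal sketch, with made up module/function names and signatures:
use wasmtime::Linker;

struct RuntimeState; // placeholder for whatever host-side state we end up needing

fn register_helpers(linker: &mut Linker<RuntimeState>) {
    // allocate an array with the given capacity, return a pointer into linear memory
    linker
        .func_wrap("fastn", "array_new", |_capacity: i32| -> i32 {
            todo!("allocate in wasm linear memory")
        })
        .unwrap();

    // copy `len` bytes starting at `ptr` into a freshly allocated runtime string,
    // return a pointer to the new string
    linker
        .func_wrap("fastn", "string_new", |_ptr: i32, _len: i32| -> i32 {
            todo!()
        })
        .unwrap();
}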

The challenge is in variables and data. Let's look at them. First we have what we call global variables. If you declare a variable in a module, it's effectively a module scoped global variable.

Before we proceed further, let's look at what is happening in the ftd compilation/interpreter pipeline. Given a string, we first parse it into p1, and then call ftd::interpreter() on it. The interpreter returns a Document .. (TODO).

malloc with wasm

17th May 2023

What is this thing?
(data (i32.const 0x42) "Hello, Reference Types!\n")

Been researching how malloc is implemented in wasm.

Source

So wasmtime::Caller, which we have access to from host function handlers, can be used where we need wasmtime::Store.
let input: &[u8] = ...;

// host function returning the length of the input buffer
linker
    .func_wrap("my_host", "get_input_size", move || -> i32 {
        input.len() as i32
    })
    .unwrap();

// host function that copies the input buffer into wasm linear memory at `ptr`
linker
    .func_wrap(
        "my_host",
        "get_input",
        move |mut caller: Caller<'_, WasiState>, ptr: i32| {
            let mem = caller.get_export("memory").unwrap().into_memory().unwrap();
            let offset = ptr as u32 as usize;
            mem.write(&mut caller, offset, input).unwrap();
        },
    )
    .unwrap();
Which means we can recursively call into wasm from Rust. Malloc then becomes trivial: say you exported foo, which needs to create and return a string to wasm. When foo is called, we call the malloc exported from wasm, initialise the memory, and return the pointer to it.
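
A minimal sketch of that flow with wasmtime, assuming the wasm module exports a malloc (taking a byte length, returning a pointer) and its memory; the host function name here is made up:
use wasmtime::{Caller, Extern, Linker};

fn register_foo(linker: &mut Linker<()>) {
    linker
        .func_wrap("my_host", "foo", |mut caller: Caller<'_, ()>| -> i32 {
            let bytes: &[u8] = b"hello from the host";

            // call back into wasm to allocate space for the string
            let malloc = caller
                .get_export("malloc")
                .and_then(Extern::into_func)
                .expect("wasm module must export `malloc`")
                .typed::<i32, i32>(&caller)
                .unwrap();
            let ptr = malloc.call(&mut caller, bytes.len() as i32).unwrap();

            // initialise the freshly allocated memory and hand the pointer back to wasm
            let memory = caller
                .get_export("memory")
                .and_then(Extern::into_memory)
                .expect("wasm module must export `memory`");
            memory.write(&mut caller, ptr as u32 as usize, bytes).unwrap();

            ptr
        })
        .unwrap();
}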

fastn-runtime

12th May 2023

I have renamed our fastn-surface discord channel, and the fastn-surface crates, to fastn-runtime. I was thinking we could create just the UI layer, and the runtime would be a separate crate. Surface, as in a thing to draw on, and I thought maybe our drawing utilities could be used by others.

Turns out we cannot do that easily. Drawing requires optimisation, like knowing what event changed what part of the UI. The drawing surface is also the source of events, and event handling has to be done to update the UI, so the separation did not prove easy. There is so much shared data between UI state and runtime state that combining the two into one unified concern appears easier.

We can still aim for creating a crate that can be used independent of the ftd or fastn crates, so you can build your own UI. But even that is a bit tricky. The problem is our data layer. Our data layer is quite tied to the UI layer: if you want to use our UI stuff, you have to model your UI state data using our data layer. Which means you have to have concepts of modules and variables, and you have to use the exact types that FTD exposes.

Assuming you can do that, there is one more concern: the functions. The UI triggers functions, functions change data, and that triggers UI updates. This is what the runtime is responsible for, so the runtime has to be able to execute the functions as well. But how do we share function definitions across crates? Either we say fastn-function is a separate crate that you can take a dependency on, and that crate will contain the function parser etc. But this leads to another issue: can fastn-function have direct access to ftd stuff? If so, we end up bringing ftd back as a dependency. Or we create a function syntax and abstraction which does not know about ftd, and ftd related operations are "registered" when using it with ftd.

Technically we can try that: fastn-function, which defines a function layer. But then the function layer will need access to data, which is managed by the runtime, and to data types, which are limited by what is supported by ftd. We can invert this last bit and say fastn-function and fastn-runtime define the types supported by the two, and ftd conforms to those definitions.
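
A rough sketch of how that registration could look (all names here are hypothetical, not a design commitment): fastn-function knows nothing about ftd, and ftd plugs its own operations in at setup time.
use std::collections::HashMap;

// A value type owned by fastn-function/fastn-runtime; ftd would map its own
// types onto these. Illustrative only.
#[derive(Debug, Clone)]
enum Value {
    Boolean(bool),
    Integer(i64),
    Text(String),
}

// An operation registered by the embedder (ftd, or any other UI layer).
type Operation = Box<dyn Fn(&[Value]) -> Value>;

#[derive(Default)]
struct FunctionRegistry {
    operations: HashMap<String, Operation>,
}

impl FunctionRegistry {
    fn register(&mut self, name: &str, op: Operation) {
        self.operations.insert(name.to_string(), op);
    }

    fn call(&self, name: &str, args: &[Value]) -> Option<Value> {
        self.operations.get(name).map(|op| op(args))
    }
}

fn main() {
    let mut registry = FunctionRegistry::default();

    // ftd registers an ftd specific operation; the registry does not need to
    // know what "toggle" means
    registry.register(
        "toggle",
        Box::new(|args| match args {
            [Value::Boolean(b)] => Value::Boolean(!b),
            _ => panic!("toggle expects a single boolean"),
        }),
    );

    let result = registry.call("toggle", &[Value::Boolean(true)]);
    println!("{result:?}"); // Some(Boolean(false))
}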

Is it a worthwhile goal to give a data/runtime layer, decoupled from ftd/fastn? Who will use it, what are the use cases? I may want to build my own UI without relying on ftd syntax and fastn package manager etc. For now we are keeping fastn-function and fastn-runtime together. We will see how it goes later.

Path to fastn 0.4

8th May 2023

We released fastn 0.3 around Jan 2023. This release was about creating a syntax we like and want to support for a long time. Since Jan we have focused on documentation, bug fixes etc. It's time we start thinking about what the next major release would be. Let's look at some of the things that are missing or broken in 0.3.

What 0.4 Should Be About

In this release we have to focus on one thing and make that significantly better. And we already have a candidate for that: our runtime. We are confident about our syntax, but ftd.js is a liability right now. We do not handle dynamic DOM construction currently. We do not know how to do lazy loading of elements; for example, we currently ship both mobile and desktop views, but if we had lazy loading we could ship one and load the other based on need. Further, our event handlers and functions are lacking. They can do very little, our grammar for them is barely there, and we do not even support if inside functions.

We have been playing with rendering on the terminal and created some designs for how to render using the GPU. This approach has a very interesting aspect: we do not have the crutch of JavaScript and the DOM rendering engine available to us, so we really have to worry about how we construct the DOM and how we handle events. This is the core of our 0.4 release.

How we store data, data dependencies, DOM dependencies etc. is what we can revamp in this release. While we will use GPU and terminal rendering, they are only for giving ourselves a simple target, so we are dealing with an easier problem statement. The goal of this release is not at all to make the entire set of styles we support available to our GPU/terminal rendering backends. Outside the browser we will only support a tiny subset of the styles we support in the browser.

There is another thing we are not too proud of right now: the layouting language. We have done some design of how to lay out things, but we kind of rely on the browser alone for deciding what a bit of ftd code should look like when rendered: "whatever it looks like in the browser right now". We want to have a proper specification of layout and all the layout related styles.

To specify layout properly we will have to implement our own layouting. We could take this on in 0.4, but it is a bit too much work. So we can say that "0.3 rendered in Chrome" remains the specification, and ensure 0.4 meets this specification. We will worry about the rendering specification in 0.5, as, one, it can be a lot of work to specify layout and rendering, and two, it can be a backward breaking change.

So the important guideline for 0.4 is that it must be transparent: no user visible change should happen.

Page Size

Docusaurus: 1.9MB (total), 603KB (transferred), 11.6K (index page), 48 requests over 10 domains. fastn.com: 2.38MB (total), 294KB (transferred), 98K (index page), 40 requests over 2 domains.

So overall we may not be horrible, but in terms of index page size we are quite bad (about 9x). We use two domains right now, but this is an oversight: we seem to have hard coded some image URLs, and it should all be served by one domain soon.

The page size is bad because we have some if conditions that are not optimised.

We use if conditions for responsiveness. Sometimes we change the style directly, e.g. font size or responsive length, and the DOM structure is the same for both mobile and desktop. But sometimes we have two different DOM trees for mobile and desktop. The pattern is like this:
-- component page:

-- page-desktop:
if: { ftd.device == "desktop" }

-- page-mobile:
if: { ftd.device == "mobile" }

-- end: page

-- boolean f: foo()

-- boolean foo:

bar()

So we have optimised components for both mobile and desktop, and the outer component calls the two. ftd.device is not static, meaning it can change after the page has loaded, for example when we resize the desktop browser. Also, since we do not know on the server side whether the requesting device was mobile or desktop, we send both, and switch from desktop to mobile on mobile devices after page load.

This sounds alright but is actually a terrible idea if you ask yourself what happens to any of the components used inside, say, page-desktop. Say page-desktop uses a component foo, and the internal definition of foo also has foo-mobile and foo-desktop. The markup we ship keeps multiplying at every level of nesting.

Better Functions

Dynamic DOM

Misc Items

  • Match for or-type
  • module, with validation
  • public private
  • target
  • export
  • text and prose type
  • error handling
  • continuation syntax
  • compact syntax for record
  • current package
  • markup
  • dependency revamp
    • binary version pinning
    • edition support
  • lifecycle events
  • testing support
  • documentation support
  • search (id)
  • comment (inline till end of line, multiline)
  • lazy loading
  • translation (dropdown)

fastn-surface

5th May 2023

Based on the discussion I had with Arpita about JS vs WASM earlier, I have started a new crate in the fastn repo called fastn-surface. The crate will have one main abstraction: the top level window.
fastn_surface::Window
struct Window {
    // private stuff
}
When the window is constructed it takes the window title and a root fastn_surface::Element. The window works for browser, terminal and native.
enum Element {
    FlexContainer(FlexContainer),
    Text(Text),
    Image(Image),
    Code(Code),
    TextInput(TextInput),
}
Any UI that can be represented using these structures can be drawn/rendered using this crate. To draw/render something we need to decide which backend we are using. We have three backends: native, terminal and browser.
constructing fastn_surface::Window on browser
let root = construct_root();
let title = "hello";

let win = fastn_surface::web::create_root_window(title, root);
win.run();

.create_root_window() uses the entire document body to show the UI. If you want to update only a single DOM node, say because you have an existing app and you only want to power a single div using fastn, you can use fastn_surface::web::window_in_id(id, root), which does not take the title.

This only works if the web feature is enabled. We can similarly enable the wgpu or terminal features and use fastn_surface::{wgpu,terminal}::create_root_window() to create windows on those backends. When using the terminal we also have access to fastn_surface::terminal::print(root, Option<u16>) to print the UI. If the width is not given we will compute the terminal width.

Hydration

For browsers, the window can be serialised using the to_html() method. The HTML should be inserted in the page at the right location on the server side, and when constructing the window back, web::create_root_window() or window_in_id() checks if the serialised HTML is already present; if so, it only attaches event handlers.
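
Roughly, the flow would look like this (a hypothetical sketch reusing the names from above; none of this API is final):
// server side: render the window once and embed the markup in the page
let html = win.to_html();
let page = format!("<body><div id=\"fastn-root\">{html}</div></body>");

// client side (wasm): reconstruct the window; since the serialised HTML is
// already present inside #fastn-root, only event handlers get attached
let win = fastn_surface::web::window_in_id("fastn-root", root);
win.run();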

Mutating Window

Once the window has been constructed, the elements can be mutated using the window mutation API. The mutation API ensures the surface is precisely aware of each change. The returned windows implement the fastn_surface::WindowMutation trait.

For mutating we need to identify the exact node that we want to mutate. We want this operation to be fast. How is the tree stored? One option is we have a top level simple tree.
use slotmap::{new_key_type, SecondaryMap, SlotMap};

new_key_type! { struct NodeKey; }

enum NodeKind {
    FlexContainer,
    Text,
    Image,
}

struct Window {
    nodes: SlotMap<NodeKey, u32>,
    node_kinds: SecondaryMap<NodeKey, NodeKind>,
    containers: SecondaryMap<NodeKey, FlexContainer>,
    text_nodes: SecondaryMap<NodeKey, Text>,
    root: NodeKey,
    root_taffy: taffy::NodeKey,
    taffy_root: taffy::Taffy,
}

impl Window {
    fn get_next_key(&mut self) -> u32 {
        todo!()
    }

    fn set_root(&mut self, f: FlexContainer) -> NodeKey {
        let next = self.get_next_key();
        let k = self.nodes.insert(next);
        self.node_kinds.insert(k, NodeKind::FlexContainer);
        self.containers.insert(k, f);
        self.root = k;
        k
    }

    fn create_text(&mut self, t: Text) -> NodeKey {
        let next = self.get_next_key();
        let k = self.nodes.insert(next);
        self.node_kinds.insert(k, NodeKind::Text);
        self.text_nodes.insert(k, t);
        k
    }

    fn create_container(
        &mut self, c: FlexContainer, children: Vec<NodeKey>
    ) -> NodeKey {
        todo!()
    }

    fn add_child(&mut self, parent: NodeKey, child: NodeKey) {
        match self.containers.get_mut(parent) {
            Some(c) => {
                c.children.push(child);
                // keep the taffy layout tree in sync; the child's taffy node
                // still needs to be looked up from `child`
                self.taffy_root.add_child(c.taffy, todo!());
            }
            None => panic!("parent is not a container"),
        }
    }
}
let mut dc = data_container(win);

let index = dc.create_module();

// -- boolean $open: true

let open = dc.create_mut_boolean(index, true);
let open_ref = dc.get_ref(open);

dc.update_boolean(open, false);

// -- boolean $foo: $open

let foo = dc.create_alias(open);

// -- boolean hello(a):
// boolean a:
//
// !a

let hello_wat = "
// pseudocode for the wat we would generate
fn hello(args: &[&dyn Any]) -> Box<dyn Any> {
    let a = args[0].downcast_ref::<bool>().unwrap();
    Box::new(!a)
}";

let hello_a = dc.create_boolean_argument();
let hello_return = dc.create_boolean_return(); // record or enum also supported

let hello = dc.create_function([hello_a], hello_return, hello_wat);

// -- boolean bar = hello(a=$open);

let bar = dc.create_formula(hello, [open_ref]);

// win

let mut win = web::create_root_window();


// -- ftd.text: hello
// color if hello($open): some_fn(red)

let t = win.create_text(Text {
    value: "Hello",
    ..Default::default()
});

let t_color_wat = "
// pseudocode for the wat we would generate
fn t_color(args: &[&dyn Any]) -> Box<dyn Any> {
    let a = args[0].downcast_ref::<bool>().unwrap();
    if hello(&[a]) {
        Box::new(Some(some_fn(&[\"red\"])))
    } else {
        Box::new(None)
    }
}";

dc.add_color_dependency([open], t, t_color_wat);

dc.add_mutation(open, t, |node, value| {
    if value {
        win.set_text_color(node, "red")
    }
});

// FlexContainer fields: gap: Option<usize>, direction (type TBD),
// children: Vec<NodeKey>, taffy: taffy::NodeKey
let c = FlexContainer {
    gap: None,
    direction: todo!(),
    children: vec![],
    taffy: todo!(),
};

let c_id = win.set_root(c);

let t = Text {
    value: "hello".to_string(),
    // other text related styles
    ..Default::default()
};

let t_id = win.create_text_with_parent(t, c_id);

Events

The window raises events. Event handlers can be attached to an Element.

Mental Model For UI and Data

While fastn-surface is designed for UI, it is optimally used if the data model is kept in sync with the UI.

Pyret Playground

Pyret is a language, and they have an online playground which uses Google Drive to store your Pyret files. I find it quite interesting. They could have created their own store, but that would have been a pain. Or they could have used GitHub, the traditional place for storing source code, but Google Drive gives you a bigger target audience.

They store the refresh tokens, and access Google Drive using the tokens when the document is needed. If we take inspiration from them, we can offer a playground if fastn can be compiled as a WASM binary. Then our playground would only have to worry about how to fetch files from Google Drive and pass them to wasm. No other backend stuff required.

JavaScript vs WASM vs ?

[Summary of my chat with Arpita]

In ftd we allow people to create functions. These functions are executed either at page generation time, or after the page is rendered, called from event handlers.

We are targeting web browsers, terminal, native UI and embedded systems, and our functions have to run in all these places.

We have a few ways to run our functions. One is we add JS as an internal dependency of fastn, compile all fastn functions to JS functions, and let JS run them. In the browser we already have a JS engine, so websites would be relatively fast to load. Browsers are also very good at JIT compiling JS functions, so they will run pretty fast as well. But for terminal, native and embedded we would now need to ship QuickJS or some such.

Our next option is WASM. We compile fastn functions to WASM and add wasmtime or wasmer as a dependency, and let them optimise our generated code. These will run pretty fast thanks to wasmtime etc. But wasm may be a heavy dependency, especially for embedded systems.

Or we write our own "interpreter", a byte code machine like Python's etc. To run functions in the browser we then have to either implement them in both the interpreter and in JS, or ship the interpreter to the browser as WASM. If we maintain both the interpreter and JS, keeping the behaviour of fastn functions the same in both may get tricky. This approach has the fewest dependencies, and is also what we do right now. But our function library is small and the function language is minimal, it does not even support ifs, so this will not continue to be easy.

wasm functions

Do we allow people to write fastn functions in wasm? We can take a target specific answer here: we can say wasm is a target capability, and not a fastn capability. The user is then in full control, they can opt in to wasm or not, and if they go with wasm they will get access to functions written in wasm, but their code will then not work in places where wasm is not available.

The best solution seems to be that we have the interpreter, the JS generator and opt-in wasm. The interpreter can also use wasm, if it is supported on that device, as an optimisation.

limits on RAM and CPU

If we go with the interpreter approach, how do we limit RAM and CPU? Both interpreted code and wasm can implement these limits.
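
On the wasm side, wasmtime already gives us both knobs: fuel for CPU and a ResourceLimiter for memory. A minimal sketch, assuming we go with wasmtime (exact method names vary a bit across wasmtime versions):
use wasmtime::{Config, Engine, Store, StoreLimits, StoreLimitsBuilder};

struct State {
    limits: StoreLimits,
}

fn limited_store() -> Store<State> {
    let mut config = Config::new();
    config.consume_fuel(true); // every instruction burns fuel, bounding CPU

    let engine = Engine::new(&config).unwrap();
    let state = State {
        // cap linear memory growth at 16 MiB
        limits: StoreLimitsBuilder::new()
            .memory_size(16 * 1024 * 1024)
            .build(),
    };

    let mut store = Store::new(&engine, state);
    store.limiter(|state| &mut state.limits);
    store.set_fuel(10_000_000).unwrap(); // execution traps once the fuel runs out
    store
}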

Shared Hosting

We are planning to offer shared hosting, and the interpreter approach on our shared host would be quite wasteful for us, as an interpreter is slower than WASM. But we can do wasm code generation from fastn functions to solve this.

Three Execution Modes

The AST generated from fastn has to be 1. directly interpreted for embedded systems which can't run WASM or JS, 2. translated into JS code for browsers, and 3. translated into WAT code for shared hosting, or maybe even dynamic hosting/regular fastn.

WASM and JS can take us quite far. Terminal and native UI can take a WASM dependency. So we do not have to build the interpreter for now, and can build it only when embedded device support is needed.

If we are generating WASM, we do not even have to generate JS for now, as generating JS instead of shipping WASM to the browser is only an optimisation.

What About Runtime?

The runtime's job is to call functions when events happen, and to modify the UI based on events and dependencies etc. If we take WASM as a dependency for compiling event handlers, the runtime can also be written in WASM.

Is WASM < JS for Browsers?

We do not even know that. JS (99+%) is more widely supported than WASM (95%). But what about wasm size vs JS code size? Are we preferring JS because JS is helpful for the 95%, or because we want the extra 4%? If the motivation is the 4%, what would that figure be in 2024? WASM will become more popular with time. So a JS backend, unless it delivers a significant advantage over WASM (much smaller size or much better speed), is a bet against time.

Videos For FastN

Ajit has been making videos for us. He has created an entire course already: fastn.com/expander/. Now we are trying to ramp up social media marketing and video creation.

We are doing the planning of our upcoming videos in the "planning document": fastn.com/planning/. Any idea we get about a possible video we can create, we add to the planning document. We use some legends there, like R meaning we are ready to create this video, D meaning we have to add some feature or fix some bug in fastn before we can create this video, or P meaning this video requires some preparation, like creating an app, or a color scheme etc. The legends are optional, the important thing is to dump all the ideas there.

Prioritizing Videos

Once we have a list of possible videos we have to figure out which video we want to create next. The goal of these videos is to get viewed, so they must be about what our end users want. We are creating these videos for people to learn, so whatever we feel people most need to know should be covered by our videos before we start covering niche topics that only a few people will need. The prompt to decide: if we were going to press tomorrow and could make only one more video, one that would be viewed by a lot of people, which one should it be?

Video Effort

One important consideration is video effort: we do not want to create videos that lower our output. We have a lot of low hanging fruit available; we can create dozens or even hundreds of quick videos that each talk about a small aspect of what we have built, and yet are of help to users.

Video Titles

Ultimately we want people to want to see our videos, and video titles play a very important role. The example I give: say there is a new PotX which is the best tea pot for making tea. If you are introducing it you could call the video "What is PotX?", but that is a horrible title, as no one knows what PotX is and they have hundreds and thousands of videos to watch on YouTube and other places, so it will most likely get ignored. Instead a title like "Best pot for making tea" or something, which is click-baity, will get more views.

We do not want to go crazy in this direction either. We are not creating click-baity content, as that will lower our reputation among smart people, and we are catering to people who are very quick to spot salesy tricks and are repelled by them.

So the idea is to not talk about X, but talk about the benefit one gets because of X. The benefit should be in the title. For example, there is a video "border-radius in fastn"; if you know what border radius is, this works. But what if you have never done CSS, what if you are a lay person? You may deduce it has "radius" so it must be something to do with circles etc., but don't make them do so much work. Border radius is for rounding corners. What people are looking for is how to create an image with rounded corners or a button with rounded corners. The jump from radius to rounding is a big one.

Planning Cum Publishing Document

For every video we want a planning document that should be created as soon as we start working on that video; it should contain the script, the content, code samples etc. Any video shared for feedback should be shared by adding the video to the planning document. If multiple versions of the video were shot, each version should be available in the planning document. The YouTube URL MUST NEVER BE SHARED. The planning document should be added to the planning section of the fastn.com site.

Once the video is done we need a "publishing document" for that video. We never share YouTube video URLs anywhere; we share the published document for the video. The document should embed the video, and also contain all the relevant code snippets, links and supplementary material for the video. The document can also contain the content of the video in text form, so people who cannot listen or do not like videos can get the same content.

The published document should be added to all the relevant sections of the fastn site, under the "video course" sub section. If the video may be helpful for designers, include it in the designer section, etc. Including a video in multiple sections is perfectly okay and even encouraged.

We have a feature in fastn that allows the same document to be published on multiple URLs or sections. We also have a feature in fastn that allows you to create multiple views of the same document, so the planning view can contain the script, the draft videos etc., and the published view can contain only the information relevant for end users. We should use these features.

Chat With Contributors

We don't only create training videos for learning fastn, but also short video interviews with people who are building fastn itself. These videos can pick small topics like "why are we a statically typed language and not dynamically typed like JavaScript?", or "why did we create types for dark mode handling?" etc. A list of such videos is also in the planning document.

Each such video should be 10 minutes or less. It should select one question or topic, which will become the title of the video itself. The material should be semi technical: it should go into some technical depth but not too much; ideally we should show 4-10 lines of code, not a whole source code file. It should be conversational, and should be considered entertainment and not education. So we are not trying to be comprehensive and discuss everything ("this is also supported, that is also supported"); we just talk about the one thing we are most passionate about.

The tone should be "we are very excited to share something with you". It should ideally not require too much planning; we are not writing scripts etc. But some planning, in terms of what we are going to cover in the video and what questions are going to be asked, should be done. Maybe a trial video should be created, and then, based on that video, the questions should be written down and some attempt at how to answer them should be made.

Chat With Learners

Anyone who finishes the course and has built something interesting, we have to do a video interview with them as well. Ajit has a Google Doc with interview questions; we will move it to the planning area on fastn, and in each interview we should focus on a single question or two. Short is better. People learning that others like them are learning fastn is the important part, so 10 short videos proving 10 people have learnt is more important than 2 long videos.

Self Critical

The content should include both good and bad things; we do not want the videos to be all "we are so good". We should also highlight where we are not good yet, where you will face issues if you use fastn, etc. We have to strike a balance: we are doing sales and marketing, but we also want to remain authentic and honest.

How To Think About IME?