June 9, 2023
A journey that ends with using AI to generate cross-framework components.
@rspack/core is a new bundler written in Rust that aims to match the interface and functionality of the popular webpack. Rspack is designed to seamlessly work with any existing webpack loader or plugin, although it's worth noting that some of the plugins may encounter issues since it's still a relatively new project. Nevertheless, given the vastness of the webpack ecosystem, striving for compatibility is a commendable goal. Rspack has been built from the ground up, incorporating many time-tested features that would typically require specific configurations in webpack.
After successfully transitioning webpack-react-pdf to use Rspack with minimal configuration and without any problems, I realized it was time to rewrite my everyday development build tool, which was based on webpack.
While the @rspack/cli already offers a comprehensive build tool out of the box, my tool directly utilized the programmatic webpack interface. Unfortunately, at this stage, Rspack's programmatic interface lacks documentation. To overcome this limitation, I resorted to examining the CLI implementation as my reference. In this blog post, I will document the programmatic interface since the official documentation for it is currently unavailable. For regular usage of the CLI and configuration, you can refer to the well-documented Rspack guide on their official website.
Update: Usage has been updated to reflect many of the breaking changes introduced over the past year. Additionally, a workaround for Bun compatibility and TypeScript types have been added.
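To start with the programmatic interface, a minimal sketch looks roughly like the following; the entry point and the asset rule are illustrative, not taken from a real project.

```ts
import { rspack } from '@rspack/core'

// Minimal sketch of the programmatic interface; entry and rule are examples.
rspack(
  {
    mode: 'development',
    entry: './index.js',
    module: {
      rules: [
        {
          test: /\.png$/i,
          // Built-in asset handling: inlines small files, emits larger ones.
          type: 'asset',
        },
      ],
    },
  },
  (error) => {
    if (error) {
      console.error(error)
    }
  }
)
```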
The above code snippet demonstrates a straightforward way to configure and trigger compilation using the core rspack method. The configuration format is similar to webpack, but with the addition of the built-in asset loader for handling PNG assets. This versatile loader automatically inlines assets below 8 KB and loads larger assets separately.
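Building on that, a sketch with error handling, stats output, and the built-in HTML plugin could look like this; the title and paths are illustrative.

```ts
import { rspack, HtmlRspackPlugin } from '@rspack/core'

// Sketch with error handling, stats output and the built-in HTML plugin.
rspack(
  {
    mode: 'production',
    entry: './index.js',
    plugins: [
      // Generates an index.html that references the emitted bundles.
      new HtmlRspackPlugin({ title: 'My App' }),
    ],
  },
  (error, stats) => {
    if (error) {
      console.error(error)
      process.exit(1)
    }
    if (stats?.hasErrors()) {
      console.error(stats.toString())
      process.exit(1)
    }
    // Print a summary of the compilation.
    console.log(stats?.toString({ colors: true }))
  }
)
```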
In the above example, a more advanced approach is taken, including error handling within the callback method passed to rspack(), as well as displaying compilation statistics. Rspack includes its own version of the essential html-webpack-plugin, available as html-rspack-plugin, which allows for customizing and outputting HTML files. In most cases, the built-in plugin rspack.HtmlRspackPlugin will suffice and achieve the same results.
The built-in plugins greatly simplify the configuration process. Over the years, a set of essential plugins has emerged within the webpack ecosystem, and Rspack ships with its own built-in plugins to cover most of these requirements. A useful one is rspack.DefinePlugin, which replaces constants in the code during the build process, making it convenient to exclude development code from the production bundle. Also, rspack.CopyRspackPlugin offers functionality similar to copy-webpack-plugin. While there are more plugins available, these two show how the built-ins keep the configuration small while maintaining familiar interfaces for webpack users. Before these plugins were aligned with webpack, they were available as so-called builtins directly in the configuration.
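A rough sketch of how these two built-ins might appear in a configuration; the values are illustrative.

```ts
import { DefinePlugin, CopyRspackPlugin } from '@rspack/core'

// Sketch of the two built-ins mentioned above; values are illustrative.
const plugins = [
  // Replaces the constant at build time so development-only branches
  // can be removed from the production bundle.
  new DefinePlugin({ 'process.env.NODE_ENV': JSON.stringify('production') }),
  // Copies static files to the output folder, like copy-webpack-plugin.
  new CopyRspackPlugin({ patterns: [{ from: 'public' }] }),
]
```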
To configure React, the built-in SWC transformer, which is also written in Rust, can be set up for JSX files. It supports the new automatic runtime as well as hot reloading. Emotion, a popular CSS-in-JS library that requires a compile-time transform, can also be enabled on the loader.
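A possible rule for the built-in SWC loader might look like the following sketch. The react options are standard SWC settings; how Emotion is enabled on the loader has varied between Rspack versions, so it is only hinted at here.

```ts
// Sketch of a rule for the built-in SWC loader handling JSX and TypeScript.
const reactRule = {
  test: /\.[jt]sx$/,
  use: {
    loader: 'builtin:swc-loader',
    options: {
      jsc: {
        parser: { syntax: 'typescript', tsx: true },
        transform: {
          react: {
            runtime: 'automatic', // new JSX runtime, no explicit React import
            development: true,
            refresh: true, // hot reloading via Fast Refresh
          },
        },
        // Emotion support can be enabled on this loader as well; the exact
        // option has changed between Rspack versions.
      },
    },
  },
}
```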
When working with TypeScript, the configuration can be validated against the RspackOptions exported from @rspack/core.
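A sketch of the Preact-specific part, typed against RspackOptions, follows; the alias approach is my assumption of how the rewrite can be achieved.

```ts
import type { RspackOptions } from '@rspack/core'

// Sketch of the Preact-specific additions, typed against RspackOptions. The
// aliases are an assumption of how the React imports emitted by the automatic
// JSX transform can be redirected to Preact's compatibility layer.
const preactConfiguration: RspackOptions = {
  resolve: {
    alias: {
      react: 'preact/compat',
      'react-dom': 'preact/compat',
      'react/jsx-runtime': 'preact/jsx-runtime',
    },
  },
}
```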
When added to the regular configuration, the above snippet configures JSX rendering for Preact. It rewrites the React imports created by the automatic transform, as the Preact package offers a compatibility layer for them.
Typically, during development, one would utilize the compiler in watch mode. Thanks to Rspack's exceptional speed and optimized partial recompilation, any code changes are instantly reflected in the browser. Initiating watch mode programmatically is as simple as calling the watch function on the compiler. This function takes the watchOptions from webpack as the first argument and a callback as the second argument. The WatchOptions are passed to watchpack and rarely require customization. Invoking watch triggers an initial build, and the callback is called again after every subsequent rebuild.
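A sketch of programmatic watch mode, with illustrative options:

```ts
import { rspack } from '@rspack/core'

// Sketch of programmatic watch mode; options and paths are illustrative.
const compiler = rspack({ mode: 'development', entry: './index.js' })

const watching = compiler.watch({ aggregateTimeout: 300 }, (error, stats) => {
  // Runs after the initial build and again after every rebuild.
  if (error) {
    console.error(error)
    return
  }
  console.log(stats?.toString({ colors: true }))
})

// watching.close(callback) stops the file watcher when no longer needed.
```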
The widely used webpack-dev-server, which is utilized by Rspack, also runs the compilation in watch mode and automatically sends the updated assets to the server it has set up.
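Starting the dev server programmatically might look roughly like this sketch, with illustrative values:

```ts
import { rspack } from '@rspack/core'
import { RspackDevServer } from '@rspack/dev-server'

// Sketch of starting the dev server programmatically; values are illustrative.
const compiler = rspack({ mode: 'development', entry: './index.js' })

const server = new RspackDevServer(
  { port: 3000, host: 'localhost', open: true },
  compiler
)

await server.start()
```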
The key elements to configure here are the port and host, which inform the server where to serve the assets. When open is set to true, the page will automatically open in the browser. The configuration provided to the compiler should always be in development mode, as indicated by the name "dev server."
Update: It is now possible to set the writeToDisk option to true in the devMiddleware to also write the generated assets from memory to disk in the /dist folder.
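As a sketch, the relevant part of the dev server options would look like this:

```ts
// Sketch: in addition to port and host, write the in-memory assets to /dist.
const devServerOptions = {
  port: 3000,
  devMiddleware: {
    writeToDisk: true,
  },
}
```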
Update: The workaround mentioned below for fork-ts-checker-webpack-plugin is no longer necessary as the issue has been resolved in version 0.1.12 of Rspack.
Although Rspack aims to support existing webpack loaders and plugins, I've encountered some notable cases where certain plugins fail due to missing hooks. For example, the fork-ts-checker-webpack-plugin fails because the afterCompile hook is absent. It's unclear why this hook is missing, as it has not been deprecated by webpack in any way. However, a simple workaround is to patch the missing hook by assigning it to another hook at a similar point in the bundling lifecycle.
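A minimal sketch of such a patch plugin (the class name is mine):

```ts
import type { Compiler } from '@rspack/core'

// Minimal sketch: point the missing afterCompile hook at the existing
// afterEmit hook, which fires at a similar point in the bundling lifecycle,
// so plugins tapping afterCompile (like fork-ts-checker-webpack-plugin) run.
class PatchAfterCompilePlugin {
  apply(compiler: Compiler) {
    const hooks = compiler.hooks as any
    hooks.afterCompile = hooks.afterEmit
  }
}
```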
By including the above plugin, which redirects the missing afterCompile hook to the existing afterEmit hook, TypeScript type checking can successfully work with the aforementioned plugin.
During the early days of webpack, each team would set up their own webpack configuration for each project. However, over time, the community converged on a few commonly used loaders and plugins. For a long time, create-react-app was the go-to framework for creating React apps. Unfortunately, it has somewhat withered away. In its place, next, a React framework focused on server-side rendering, and an alternative called vite (based on esbuild and rollup) have emerged.
Personally, neither create-react-app nor vite encompassed enough features for my needs, so I created my own wrapper around webpack called papua. Since the tool itself was due for a complete rewrite in TypeScript, the idea of significantly improving performance by replacing webpack with Rspack seemed very intriguing. I'm pleased to announce that papua has now been completely rewritten to use Rspack, starting with version 4. The process of switching to Rspack has been easier than expected.
When migrating some of my projects, switching from webpack and babel to Rspack has resulted in a remarkable tenfold decrease in build time. Additionally, Rspack simplifies certain aspects as important globals are already set, and features like JSX, TypeScript, or Emotion work out of the box without requiring additional configuration. However, there are still some compatibility issues with certain webpack plugins. Most of these issues stem from features that haven't been implemented in Rspack and cannot be easily patched, as mentioned in the previous paragraph. The only plugin I currently use with papua that lacks compatibility is workbox-webpack-plugin, which enables PWA support. However, issues for these missing parts have already been reported on GitHub, and fixes are likely on the horizon. A team of developers sponsored by ByteDance is working tirelessly, and weekly feature-packed releases are common. A regression I reported in version 0.1.12 has been fixed and merged within two days.
@builder.io/mitosis is a versatile tool that enables writing components in a specific syntax, which can then be compiled into corresponding components for various frontend frameworks. It supports a wide range of popular frameworks including React, Vue, Solid, Angular, Svelte, Qwik, React Native, Swift, Stencil, Marko, Preact, Lit, Alpine, WebComponents, Liquid, and even plain HTML. Mitosis parses the input, which is written in a syntax similar to React, and transforms it into an intermediate JSON format. This intermediate representation acts much like bytecode does for Java and is further transformed to match the desired frameworks. In a previous post where I introduced an authentication service with customizable forms, I explored the idea of utilizing Mitosis to provide forms for multiple frameworks.
Mitosis offers a concise set of hooks similar to those known in React. These include useState, useRef, useStore, onInit, onMount, onUpdate, onUnMount, useDynamicTag, onError, useMetadata, useDefaultProps, and useStyle. Additionally, the plugin supports the React Context interface. It's worth noting that documentation for most of these hooks is currently unavailable. To experiment with Mitosis and examine the resulting code for each supported framework, you can utilize the interactive Mitosis Fiddle in your browser.
Mitosis is currently in beta, and while the basic concept of transforming a standardized input format into various destination frameworks has been proven to work, it is not yet feature-complete for production use. The functionality used can vary significantly between different frameworks, requiring developers to test components in each framework, similar to testing in different browsers.
However, the question remains: can this concept work in general? The challenge lies in satisfying the constraints imposed by each destination framework during compilation. For example, SolidJS loses reactivity when props are destructured, preventing Mitosis components from using destructuring even if it works in React and other frameworks. Another example is the style property, which must be written in a syntax compatible with all frameworks. While { backgroundColor: 'red' } is common, it won't work in SolidJS, requiring the input to be written as { 'background-color': 'red' }. Adapting these properties during compilation becomes even more complex when dynamic styles are involved. I've encountered challenges when trying to implement dynamic styles, and nesting components has also caused issues. Often, functionality that works in one framework will not work in others, leading to difficulties in achieving consistent behavior across different frameworks.
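As a small illustration, input that stays compatible across targets avoids destructuring props and uses the kebab-case style keys; the component below is just a sketch.

```tsx
// Sketch of cross-framework-compatible input: props are read without
// destructuring (SolidJS would lose reactivity) and the style key uses the
// kebab-case form accepted by every framework.
export default function Highlight(props: { label: string }) {
  return <span style={{ 'background-color': 'red' }}>{props.label}</span>
}
```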
In summary, the approach taken by Mitosis prompts us to consider the differences among frontend frameworks, highlighting the importance of choosing the appropriate framework for each task. Implementing a tool like Mitosis requires studying how different frameworks work or having prior knowledge of them. Interestingly, the creators of Mitosis, including Miško Hevery, known for Angular, have leveraged this knowledge to build their own framework called Qwik. @builder.io/qwik seems to combine the best practices from existing frameworks and has gained popularity, recently reaching the Release Candidate Milestone. While Qwik emphasizes performance and server-side rendering, my personal evaluation of a frontend framework may prioritize other factors.
Overall, we can once again observe that React Native's approach of using one framework, React, and running the same code on different platforms like Android or iOS works well. In contrast, the opposite approach of using a single source and compiling to different frameworks faces greater difficulties due to the inherited constraints. The issue of inheriting constraints is also evident when using React Native to target vastly different platforms, as discussed in a previous post titled Write Once, Run Anywhere?.
The most straightforward approach to creating a cross-framework plugin is to wrap a basic JavaScript version for each framework. This allows changes made to the base version to be automatically applied to all frameworks, requiring only minimal adjustments to the plugin interface. However, this approach does not suit our needs because one of our goals is to enable the plugin to utilize custom UI elements written in the language of the specific framework being used.
Can an AI like ChatGPT be used to convert components from one framework to another? ChatGPT excels at translating text between languages, and I've recently used it to effortlessly convert TypeScript interfaces into JSON schema without any prior knowledge of JSON schema. In fact, Fireship, a highly regarded development YouTuber, has recently praised the use of ChatGPT for translating components between different frameworks in one of his videos.
Following a similar approach, I would leverage the React implementation of the Authentication form for iltio and employ the AI to translate it into various other frameworks.
As the name implies, Large Language Models like ChatGPT excel in language-related tasks. They possess exceptional capabilities in understanding and translating between different languages. In this experiment, I aim to leverage this particular capability. In programming, we refer to programming languages themselves as languages. Since web development, Node.js, and native development all share JavaScript as the common language, there is no need for translation from that perspective. However, each framework, such as React, Vue, or Svelte, has its own syntax and implementation style, which, for the model, is akin to encountering a different language. Fortunately, ChatGPT's dataset includes vast amounts of programming code collected from across the web, so it is well acquainted with the more popular frontend frameworks.
Translation with a language model differs significantly from traditional translation in programming. In programming, we refer to the process of translating between languages as compilation. Compilation involves applying a predefined set of transformations to the input in a language, resulting in a correct version in another language. When implemented correctly, compilation typically works seamlessly, and programmers seldom need to engage with the process. Mitosis, which we discussed earlier, can also be seen as a compiler, converting from one meta-framework to various other frameworks. However, its implementation is still far from complete, so while it holds theoretical promise, it does not yet perform well in practice. Language models possess a statistical understanding derived from the data they were trained on for a particular framework. Although the translation process involving AI is still deterministic, the applied rules are statistically derived during the training process. Consequently, the quality of the results can vary. In general, the more code about a specific framework exists in the training data, the better the results. I obtained reasonably good results for Vue and Svelte, but for SolidJS and especially Qwik, the results were unsatisfactory. This can be attributed to the fact that SolidJS and Qwik are relatively new frameworks and may not have sufficient representation in the training set. It also explains why LLMs tend to yield excellent results initially but may falter when confronted with less accessible knowledge in specific areas.
Due to ChatGPT's output length limitations, we can only prompt it to convert smaller components. Consequently, the first step was to split the initial React implementation into smaller components. However, even with smaller components, ChatGPT imposes strict output length restrictions. To overcome this limitation, I also utilized Forefront Chat, which allows for five prompts every three hours to GPT-4, enabling the conversion of larger components. I am still uncertain whether the results from GPT-4 are significantly superior to those from GPT-3.5. Both models can readily understand a simple prompt for conversion and provide code in another framework.
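The prompt itself can stay very plain; the exact wording below is just an example.

```
Convert the following React component written in TypeScript into an
equivalent Svelte component. Keep the props, markup and behavior identical.

<component source pasted below the prompt>
```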
Once we have the code for the target framework, things start to get interesting. Since we assume no prior knowledge of any frameworks except React, the best approach is to directly place the converted implementation into the appropriate location and attempt to run it. Any errors encountered in the browser or missing functionalities will provide valuable indications of what still needs to be fixed.
The most common architecture used to implement a plugin across different frameworks is the Monorepo, where all the code is placed in the same repository but published as separate packages. In our case, we want all the implementations to reside in the same repository, but we don't need multiple packages. Instead, we can utilize the ESM (ECMAScript Modules) exports field to provide different implementations without increasing the bundle size for users, even when tree-shaking is disabled (in development mode). A Monorepo setup still makes sense for the overall structure, as each framework requires its own demo setup.
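A sketch of such an exports map; the package name and paths are illustrative and do not reflect the actual published package layout.

```json
{
  "name": "my-cross-framework-plugin",
  "type": "module",
  "exports": {
    ".": "./dist/react/index.js",
    "./vue": "./dist/vue/index.js",
    "./svelte": "./dist/svelte/index.js"
  }
}
```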
Since browsers do not discriminate between frameworks and these frameworks essentially generate the same markup and behavior, it is possible to share browser-based tests. UI tests written with tools like playwright can be executed for each framework by writing the code only once. This significantly simplifies the task for developers, allowing them to easily track which features are already implemented and working in each framework, and which framework implementations require further attention. Furthermore, it eliminates the tedium of performing the same sequence of steps for each framework.
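A shared test might look like the following sketch, where the button name is an assumption and each framework demo is targeted by switching the baseURL.

```ts
import { test, expect } from '@playwright/test'

// Sketch of a browser test shared by all framework demos; the button name is
// an illustrative assumption, each demo is targeted through its own baseURL.
test('renders the authentication form', async ({ page }) => {
  await page.goto('/')
  await expect(page.getByRole('button', { name: 'Submit' })).toBeVisible()
})
```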
Another effective approach is to move a significant portion of the authentication flow and server communication code into the core of the plugin. These components are particularly susceptible to frequent changes when the backend interface evolves. By utilizing shared code, there is no need to update each individual plugin, streamlining the maintenance process.
While AI is undeniably a helpful tool, the increase in productivity did not meet my initial expectations. In this particular approach, AI allows for a quick start with the actual implementation. However, since the generated code rarely works flawlessly, a significant amount of time is spent debugging and troubleshooting. Although ChatGPT can assist in this process, one still needs to acquire a considerable understanding of the framework. Debugging code requires a level of proficiency where one could write the code manually. Nevertheless, tasks like translating between frameworks, which can be repetitive and tedious, are effectively shouldered by ChatGPT. Personally, I find the process of learning a framework through practical implementation more enjoyable than solely relying on documentation. During this process, ChatGPT can act as an experienced mentor, readily available to answer any questions that may arise. However, the notion that AI will soon be capable of autonomously writing code without developer oversight and debugging seems quite distant.
I will certainly continue using ChatGPT for development in the future and would recommend it to other developers. Expect modest productivity gains ranging from 10% to 25%, depending on the extent of repetitive tasks involved and prior experience using AI. It's important to note that an excessive amount of repetitive coding may indicate the need for better abstractions or alternative approaches. One of the challenges I still face is getting used to reaching for AI first instead of writing the code myself. In the future, I plan to explore integrated and code-specific AI tools like GitHub Copilot or any available free alternatives. Furthermore, I aim to take it a step further and leverage AI to complete projects that I couldn't tackle on my own.
The resulting plugin implementations in Vue and Svelte can be found on GitHub, accompanied by usage examples. Additionally, there is concise documentation available for each implementation.
This post was revised with ChatGPT, a Large Language Model.