Artem Rudenko
Founder
The experiment
We built the same landing page twice: once with Lovable and once with a small team of one designer and one engineer. Both worked from the same written specification, with the same scope, content and constraints.
It was a practical question: if you take a real product, write a real specification and execute both approaches under comparable conditions, what actually happens? Does one approach win over the other?
TL;DR: The human team took much more time than Lovable, but Lovable produced only a template-level website. What follows is a report of both builds: what worked, what introduced friction and where the cost ultimately lived. We are also sharing all the artifacts: the spec for the website, all prompts and the repositories of both codebases.
The setup
Both builds started from the same written specification (check it here), which included a product description, layout requirements, audience definition, competitors, design references and a few other important things. Writing the detailed spec took 2–3 days.
The target was a production-facing landing page for Kinetics, a Rust-based backend hosting service we’ve been building, meant to explain the product, showcase its features and make pricing and getting-started instructions easy to find.
The human build was done by a small full-time team: one designer and one engineer. With the detailed spec in hand, they were given almost full autonomy in their decisions. To be fair, they were interrupted twice to discuss design direction with stakeholders.
The Lovable website was generated by the stakeholders alone. The human team knew nothing about the Lovable workflow; this was by design of the experiment.
We had to buy Lovable’s Pro plan (€50/month) to enable custom domains. Each plan has a limit on credits to spend on prompting and other actions. It’s hard to estimate how many credits we spent building the website, as there is no usage data available (they don’t disclose exactly how credits are spent); however, we never hit the plan’s limit.
Both outputs were shipped as real sites on separate domains:
Final result from the human team: https://deploykinetics.com
Final result from Lovable: https://usekinetics.com
What happened with Lovable
The first version resembled one of the major Kinetics competitors, at least in its colors. It looked decent, but it was very generic, rough and not responsive at all. So we began prompting, adjusting details large and small, and brought it to the final version over the following 2–3 days.
It’s no surprise to anyone who has used AI that the result of each request depends heavily on how precise the prompt is. To Lovable’s credit, it could sometimes detect ambiguity and ask clarifying questions before executing a prompt. For example, when the stylistic intent wasn’t straightforward, Lovable proposed multiple directions and asked us to choose between them.
It’s worth mentioning that prompt precision was crucial even for very small requests. For example, removing an unreasonably large gap below the hero section of the page took far more prompts than we expected.
Making the page responsive also took a few prompts. The changes themselves weren’t hard, but we had to explicitly point out to Lovable that the mobile version was broken in the first iterations.
Integrating the Google Calendar appointment-booking widget was a bigger pain. It took quite a lot of prompts to explain to Lovable that we needed an on-page booking widget, not a link redirecting to the Google Calendar appointment booking page. Eventually we had to figure out ourselves how to get the widget code and ask Lovable to integrate it.
Credit where it’s due: Lovable’s UX is polished. The chat interface was convenient and allowed pasting screenshots directly into the conversation, which made some visual corrections easier.
When we finished the website layout and design, we decided to generate the product’s logo right in Lovable. Again, it took a few prompts: we tried different options and brainstormed ideas with Lovable, which was generous with suggested directions and logo variants. Eventually we picked the best of what we saw, but I can’t say we are happy with it. Lovable is clearly not the right tool for creating a logo.
The most uncomfortable part of the process was the delays. Pauses between sending a prompt and seeing the results rendered could take a few minutes, depending on the complexity of the requested changes. That made it easy to switch to another task and hard to maintain continuous flow and focus.
Prompts
Result
What happened with the human team
The human build moved differently. It took a bit less than four weeks from start to finish, including design, coding, revisions and coordination with the Kinetics product team. Most questions were resolved in Slack conversations; we only had a couple of short meetings. There were no major blockers or interruptions. Only a couple of times did the team request a design-direction review from stakeholders; otherwise they worked autonomously.
Design and coding were straightforward. Some back-and-forth friction appeared when the website team had to sync the terminal output and code examples on the landing page with the actual Rust library and CLI application. That’s no surprise, though: cross-team communication is always slower than internal communication.
It’s important to mention that visual polish required deliberate work. The engineer spent several days implementing and refining animations to ensure they felt smooth. These details don’t change functionality, but they affect quality and make the website stand out.
Stages
Result
Artifacts
Both implementations are publicly accessible.
Website spec
Live builds
Source repositories
Quality comparison
As a shallow but meaningful comparison, we ran Lighthouse on both sites under identical desktop conditions. The human build showed 2x faster loading times. This is due both to the smaller amount of data transferred (201 KB in the human build vs 418 KB in the Lovable build) and to the overall loading experience (no FOUC and a smooth transition to the first screen in the human build). The other metrics show similar results, close to 100 points on the Lighthouse scale.
Human-built version:
Key metrics:
- Performance: 100
- LCP: 0.75s
Lovable-built version:
Key metrics:
- Performance: 98
- LCP: 1.34s
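The headline “2x faster” claim can be sanity-checked against the figures above. A quick sketch, using only the metrics reported in this post:

```python
# Payload and LCP figures reported above for the two builds.
human = {"payload_kb": 201, "lcp_s": 0.75}
lovable = {"payload_kb": 418, "lcp_s": 1.34}

# Data transferred: the Lovable build ships roughly twice as many bytes.
payload_ratio = lovable["payload_kb"] / human["payload_kb"]
print(f"payload ratio: {payload_ratio:.2f}x")  # 2.08x

# Largest Contentful Paint: the human build paints about 1.8x sooner.
lcp_ratio = lovable["lcp_s"] / human["lcp_s"]
print(f"LCP ratio: {lcp_ratio:.2f}x")  # 1.79x
```

So the “2x” holds almost exactly for transfer size, and is slightly generous for LCP.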
From a code-structure perspective, the Lovable output was actually very reasonable (almost as good as the human one) and ejectable. It did not introduce obvious vendor lock-in.
Result
Lovable quickly produced a working website with template-level design. The human team took much longer but delivered a higher-quality website with a more creative design.
The difference is not only the quality of the result, but also the level of supervision.
Lovable produced the website much faster than the human team, but it required continuous supervision. Small changes sometimes introduced regressions and required multiple passes to stabilize. Keeping the style and UI consistent was, in practice, the responsibility of the human doing the prompting.
With the human team, supervision was occasional and high-level, but each intervention carried more weight. Once given a direction, the team was able to maintain it in a stable and consistent way.