s/Operation 2030/Founder's Vision

This commit is contained in:
Abner Coimbre
2025-04-13 18:22:13 -07:00
parent 114587b25b
commit 232a5f178b
30 changed files with 951 additions and 4 deletions

@@ -0,0 +1,37 @@
+++
title="Proper AI Usage"
mediatype="extra"
conference="seattle-2022"
date="2022-11-16T12:00:00-08:00"
description="How Handmade devs should use AI"
thumbnail=""
[[videos]]
title="Memory Strategies"
embed="https://player.vimeo.com/video/774890907"
service="vimeo"
download_link="https://player.vimeo.com/progressive_redirect/playback/774890907/rendition/720p/file.mp4?loc=external&oauth2_token_id=1777364455&signature=3c443137144bf46966f5607ab00f818dfa5f98e1ac7643e62160f55e0bb27792"
[[speakers]]
name="Abner Coimbre"
bio=""
image=""
+++
[Prev](/founders-vision/appendix/c-is-evil-so-what) | [Contents](/founders-vision) | [Next](/founders-vision/appendix/terminal-click)
Our general stance on AI/LLMs aligns with the article ["Vibe Coding" vs Reality](https://cendyne.dev/posts/2025-03-19-vibe-coding-vs-reality.html). In particular, we tend to agree with the following observations:

> Without expert intervention, the best these tools can do today is produce a somewhat functional mockup, where every future change beyond that risks destroying existing functionality.

Or to put it more bluntly:

> These models are trained on average sloppy code, wrong answers on Stack Overflow, and the junk that ends up on Quora.

Handmade's deep appreciation for low-level knowledge affords us the privilege of resisting reliance on AI. Indeed, developers who fully embrace our values can do well for themselves in the age of LLMs without having to use them.

For Handmade devs, the proper use of AI is twofold:

- To stay informed about its progress so we can refine our arguments against its misuse
- To use it sparingly and thoughtfully in areas OUTSIDE our core programming work

Finally, when we do use AI, it must be through privacy-conscious services like Kagi's Assistant or by running models locally. Anything else violates who we are.