
ADVANCED ARTICLES FOR CUTTING-EDGE ROBOTICS & AV ENGINEERS

In this interview, EarthSense Lead Computer Vision Engineer Michael McGuire teaches us the core algorithms behind their autonomous agriculture robots.
Perciv AI is building Deep Learning algorithms for RADAR; we could call this 3D/4D Deep Learning. I recently visited their HQ, and in this post, I'm revealing what I learned...
The gates closed on February 9, 2026. The next opening is planned for mid-2026. Make sure to join the waitlist to get notified. Dear Friend, if you're trying to break into self-driving cars and autonomous robots, keep failing interviews, and keep being reminded that you're not there yet, then this page will show you how to close that gap. Here's why breaking into Perception feels impossible right now: The inform…
The LiDAR industry is changing. The $100k mechanical LiDAR is gone, and we now see incredible solid-state LiDARs mass-produced for $1,000 or less. How do these new-gen LiDARs work?
How do you certify end-to-end deep learning algorithms for production when you have no way to grade each block individually? Jonathan Péclat from Loxo explains it to us.
It's perfectly acceptable to be so-so in C++ for some companies and some projects... And it's a good strategy to prioritize AI skills and learn their cutting-edge elements: have fun, go beyond just 2D Object Detection and dive into 3D Tracking... go beyond single-task learning and embrace HydraNets, etc... But some other skills can't be skipped, and I think ROS (the Robot Operating System) is one of them. Spend a minute lo…
Don't Just "Save For Later" the Self-Driving Car tools you see online. Learn to use them today. In this video, I'll show you the Self-Driving Car Engineer Stack and how to use it in 15 minutes or less Access a series of 5 videos from my paid courses to understand how to move from beginner to intermediate Computer Vision Engineer. Topics tackled involve Stereo, BEV, Fusion, NeRFs, and more...
We created a mini webinar on team training, showing enterprises:
Copyright 2026 © Think Autonomous
Many engineers are stuck in the late-beginner stage. This is the stage where you know a lot of image processing, can even work with Deep Learning, have been through all the fundamentals, and face a sudden "I'm not sure what to do from here to become an expert." Maybe you've gotten started already, built your first CNN with Keras, trained it with backpropagation, ran your first YOLO object detectors, and ran …
STEP 1 Every time you have a goal, there should be 2 things: the strategy and the tactics. While most of my courses will teach you the tactics, the algorithms, knowledge, and technical know-how... This course will give you a clear strategy to start or grow in self-driving cars. STEP 2 The Edgeneer's Land is our flagship membership where you continuously get access to a community and to industry c…
There are two types of engineers: those who swear only by computer vision & CNNs, only do OpenCV projects, and pray to find a way to apply this at autonomous vehicle companies... and those who learn LiDARs. I have to tell you: LiDAR Engineers are built different. I believe that for an autonomous tech engineer, very few skills can get you as far ahead as knowing LiDARs & Point Clouds. From this single …
Autonomous Driving is evolving faster than you can catch up. One year, the revolution is ChatGPT; the next, everybody's already taken the idea into Foundation Models and End-To-End Learning. If you'd like to stay up to date, there are 4 or 5 core ideas to understand: TRANSFORMERS Still lagging behind on Transformers? This is because most examples out there are designed with language examples …
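To make the Transformer idea concrete without any language baggage, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. The shapes, weight names, and the idea of treating rows as "image patches" are illustrative assumptions, not taken from any specific model.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (n_tokens, d_model); Wq/Wk/Wv: (d_model, d_head)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv          # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])   # pairwise similarity between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v                        # each token = weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                   # 4 "tokens" (e.g. 4 image patches)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)           # out has shape (4, 8)
```

The exact same operation works whether the tokens are words, image patches, or LiDAR pillars, which is why Transformers transfer so well from language to perception.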
For the value they give, you may ask me how much I'm selling them for. And yet, these are all free! This page contains many of the resources I've built in the past year, in several domains, from Computer Vision & Deep Learning to Robotics.
One of the biggest misunderstandings among students and outside learners of robotics is that Object Detection = Self-Driving Cars. Somehow, this misconception keeps surviving, and many spend weeks learning all the details of the latest version of YOLO. In reality, self-driving cars are NOT using object detectors like YOLO as their main perception pipeline. It can be a tool, that is later fused wit…
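To illustrate why a 2D detector is only one block in a pipeline, here is a hedged sketch of one common fusion step: projecting LiDAR points into the image and checking which ones fall inside a detector's 2D box. The camera intrinsics, point coordinates, and box are made-up toy values.

```python
import numpy as np

def project_points(points_3d, K):
    """Project (N, 3) points in the camera frame to (N, 2) pixel coordinates."""
    pts = points_3d @ K.T                # pinhole projection: p = K @ X
    return pts[:, :2] / pts[:, 2:3]      # divide by depth to get pixels

def points_in_box(pixels, box):
    """box = (x1, y1, x2, y2), e.g. from a 2D detector such as YOLO."""
    x1, y1, x2, y2 = box
    return (pixels[:, 0] >= x1) & (pixels[:, 0] <= x2) & \
           (pixels[:, 1] >= y1) & (pixels[:, 1] <= y2)

K = np.array([[700., 0., 320.],          # toy camera intrinsics
              [0., 700., 240.],
              [0., 0., 1.]])
points = np.array([[0.0, 0.0, 10.0],     # meters, already in the camera frame
                   [5.0, 0.0, 10.0]])
pixels = project_points(points, K)       # first point lands at (320, 240)
mask = points_in_box(pixels, (300, 220, 340, 260))
# the depths of the points inside the box give the detected object a 3D distance
```

The 2D box alone says nothing about distance; it becomes useful to planning only after a fusion step like this one attaches depth to it.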
Since the beginning of the self-driving car era, many people have wanted to compare LiDAR vs RADAR. It didn't make sense: back then, these sensors were complementary. Today, in the age of 4D, the LiDAR vs RADAR comparison makes real sense. Let's see...
Autoware is transitioning to End-To-End Learning. When, and how exactly, will this happen? This is what we'll find out this month, in this exclusive interview with Samet Kukut.
Let's reveal it all: What are point clouds? What are the 3 ways to create them? How do we process them? How do we detect 3D objects inside a point cloud?
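As a taste of what "processing" a point cloud means, here is a minimal sketch in plain NumPy: a fake LiDAR scan as an (N, 3) array of (x, y, z) points, downsampled by keeping one point per occupied voxel. The 0.5 m voxel size and the random cloud are arbitrary assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
cloud = rng.uniform(-10, 10, size=(10_000, 3))   # fake LiDAR scan, in meters

def voxel_downsample(points, voxel=0.5):
    """Keep the first point seen in each (voxel x voxel x voxel) cell."""
    keys = np.floor(points / voxel).astype(np.int64)      # integer voxel index per point
    _, idx = np.unique(keys, axis=0, return_index=True)   # one representative per voxel
    return points[np.sort(idx)]

down = voxel_downsample(cloud)
print(len(cloud), "->", len(down))   # far fewer points, same overall geometry
```

This kind of downsampling is typically the very first step before clustering or feeding the cloud to a 3D detector, because it makes every later stage cheaper.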
Tesla vs Waymo: is it worth making another comparison? Well, I think they are not really comparable. Yes, one of them has a better path to Level 5, and if you'd like my expert opinion on which one, I invite you to read!
Self-driving cars collect terabytes of video every day... but is that really needed? (Spoiler: no.) In this article, you'll discover how to collect data in the AV 2.0 age; from Tesla's Trigger Classifiers to Heex's Event Management solutions, learn the different ways to do automotive data processing.
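The trigger idea can be sketched in a few lines: instead of uploading every frame, keep a rolling buffer in memory and persist a clip only when a trigger condition fires. The hard-braking threshold, buffer length, and frame names below are illustrative assumptions, not details of Tesla's or Heex's actual systems.

```python
from collections import deque

BUFFER = deque(maxlen=100)     # rolling buffer: only the last ~100 frames live in memory
HARD_BRAKE = -4.0              # m/s^2, example trigger threshold

def on_frame(frame, accel_mps2, saved_clips):
    BUFFER.append(frame)
    if accel_mps2 < HARD_BRAKE:            # trigger fired (e.g. hard braking)
        saved_clips.append(list(BUFFER))   # persist the clip surrounding the event

clips = []
for t in range(300):
    on_frame(f"frame_{t}", -5.0 if t == 150 else 0.0, clips)
# only the 100-frame clip around t == 150 is saved, not all 300 frames
```

A real trigger can be anything from an IMU threshold to a small "trigger classifier" network, but the storage logic stays this simple: record the interesting events, discard the rest.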