

Nextbase Dash 320XR Dash Cam Review: More Style Than Substance



At a glance

Expert’s Rating


Pros
Rich colors in day video
Front and rear coverage
Magnetic mounting for both front and back cameras


Cons
Poor detail in day and night captures
No GPS

Our Verdict

The dual-channel, front/rear 320XR brings familiar Nextbase amenities and style to the table, but the 1080p captures lack necessary detail.

Nextbase generally makes superior dash cams. The company’s familiar classy style and usability are present in the $150 dual-channel 320XR. Sadly, it lacks the performance we’ve come to expect from Nextbase. Despite both the front and rear 140-degree field-of-view cameras capturing at 1080p/30 frames per second, there was a noticeable lack of detail in the resulting images, to the point where we could barely make out a license plate number at mid-day.

It’s a shame, as I love the colorful 2.5-inch display, and I especially love that both cameras couple magnetically to their semi-permanent mounts. Magnetic coupling means you can just pop off the cams and put them in your backpack, something I always do here in the city when I park my convertible on the street. Leave anything in a rag-top and it’s only a quick slash of the fabric away from being stolen.

Below is a side view of the 320XR showing its SD card slot (the power button is a mistake in this rendering; it’s actually on the back). On top is the mini-USB connection for the auxiliary power adapter and on the right side (not shown) is the Type-C USB port for the rear camera. Why the USB mismatch, I can’t say, though I’d guess it has to do with parts on hand.

This review is part of our ongoing roundup of the best dash cams. Go there for more reviews and buying advice. 

As you can see below, the main camera features six different buttons alongside the display. On the left are the power, menu, and mode buttons. The latter switches you between normal video capture mode, photo capture mode, and playback mode. On the right are the up, enter, and down buttons which are used for navigating the menus. The up button also toggles audio capture, the enter button stops/starts video recording, and the down button takes snapshots.

The 320XR has a 280mAh battery that will run the camera for about 15 minutes after power is removed. If 12-volt power is cut in an accident, that reserve can be useful for capturing subsequent events. There’s also a parking mode for surveillance while you’re away from the vehicle.

That’s about it for features. You won’t find GPS (which can be useful for travelogues and confirming locations) or bad-driver tech such as lane-departure warning. In truth, we generally find that a couple more driving lessons will more than compensate for that oversight.


While I was okay with the 320XR’s captures after my first daylight run, it was largely because of the rich color. When I hit the street at night, reality set in. After-dark captures weren’t detailed enough to pick up license plate numbers just 15 feet away—with or without my headlights on. That led me to scrutinize the daylight captures, which also proved unsatisfactory. The same license plate numbers are barely decipherable. (Note that you can right-click and open the images below in a new tab to see them in greater resolution.)

As you can see below, with my headlights off the 320XR barely captures the existence of the license plate, let alone lets you decipher the numbers.

In the second capture with the headlights on, you still can’t see the numbers clearly. In fact, the reflective surface blows out the darker numbers. This might be an issue if you’re in an accident with another vehicle.

Brightening the captures in post-production doesn’t enhance detail as it does with many cameras either.

The rear night capture tells the same story as the front (there’s distortion about two-thirds of the way up the frame from the rear-window heater wire and the weather; ignore that), though there was a little more light in this scene. Again, brightening the image didn’t enhance detail; it’s still difficult to make out numbers.

Conditions here in San Francisco make it difficult to keep my convertible’s rear window clean when it’s foggy. The rear captures at night are better than this might make it appear, but still not what they should be. We’ve seen even 720p rear cameras do a better job of capturing license plate numbers.

It’s surprising and not a little frustrating how little detail Nextbase managed to pull out of the 320XR’s 1080p video. The daytime captures are attractive because of their rich color and blending, but what you want are details. Those aren’t nearly as legible as they should or could be.

Stylish and easy, but not up to snuff

Nobody was more surprised at the 320XR’s weak capture performance than I. Until now, everything Nextbase has sent our way has been top-notch. In fact, the 422GW was one of our Christmas buying recommendations.


While the 320XR’s style, display, and magnetic mounting are great, I can’t recommend it because it doesn’t deliver the detail that might make the difference in court. As they say, the devil’s in the details.



Video Friday: An Agile Year



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRA 2022: 23–27 May 2022, Philadelphia
ERF 2022: 28–30 June 2022, Rotterdam, Germany
CLAWAR 2022: 12–14 September 2022, Açores, Portugal

Let us know if you have suggestions for next week, and enjoy today’s videos.

Agility had a busy 2021. This is a long video, but there’s new stuff in it (new to me, anyway), including impressive manipulation skills, robust perceptive locomotion, jumping, and some fun costumes.

[ Agility Robotics ]

Houston Mechatronics is now Nauticus Robotics, and they have a fancy new video to prove it.

[ Nauticus ]

Club_KUKA is an unprecedented KUKA show cell that combines entertainment and robotics with technical precision and artistic value. All in all, the show cell is home to a cool group called the Kjays. A KR3 AGILUS at the drums loops its beats and sets the rhythm. The KR CYBERTECH nano is the nimble DJ with rhythm in its blood. A KR AGILUS performs as a light artist, enchanting with soft, expansive movements. And an LBR iiwa, mounted on the ceiling, keeps an eye on the unusual robot party.

And if that was too much for you to handle (?), here’s “chill mode:”

[ Kuka ]

The most amazing venue for the 2022 Winter Olympics is the canteen.

[ SCMP ]

A mini documentary thing on ANYbotics from Kaspersky, the highlight of which is probably a young girl meeting ANYmal on the street and asking the important questions, like whether it comes in any other colors.

[ ANYbotics ]

If you’re looking for a robot that can carry out maintenance tasks, our teleoperation systems can give you just that. Think of it as remote hands, able to perform tasks, without you having to be there on-location. You’re still in full control, as the robot hands will replicate your hand movements. You can control the robot from anywhere you like, even from home, which is a much safer and environmentally-friendly approach.

[ Shadow Robot ]

If I had fingers like this, I’d be pretty awesome at manipulating cubes too.

[ Yale ]

The open-source, artificially intelligent prosthetic leg designed by researchers at the University of Michigan will be brought to the research market by Humotech, a Pittsburgh-based assistive technology company. The goal of the collaboration is to speed the development of control software for robotic prosthetic legs, which have the potential to provide the power and natural gait of a human leg to prosthetic users.

[ Michigan Robotics ]

This video is worth watching entirely for the shoulder-dislocating high-five.

[ Paper ]

Of everything in this SoftBank Robotics 2021 rewind, my favorite highlight is the giant rubber duck avoidance.

[ SoftBank ]

On this episode of the Robot Brains Podcast, Pieter talks with David Rolnick about how machine learning can be applied to climate change.

[ Robot Brains ]

A talk from Stanford’s Mark Cutkosky on “Selectively Soft Robotics: Integrating Smart Materials in Soft Robotics.”

[ BDML ]

This is a very long video from Yaskawa which goes over many (if not most or all) of the ways that its 500,000 industrial arms are currently being used. It’s well labeled, so I recommend just skipping around to the interesting parts, like cow milking.

[ Yaskawa ]




20K WordPress Sites Exposed by Insecure Plugin REST-API



The WordPress WP HTML Mail plugin for personalized emails is vulnerable to code injection and phishing due to XSS.



Legged Robots Learn to Hike Harsh Terrain



Robots, like humans, generally use two different sensory modalities when interacting with the world. There’s exteroceptive perception (or exteroception), which comes from external sensing systems like lidar, cameras, and eyeballs. And then there’s proprioceptive perception (or proprioception), which is internal sensing, involving things like touch and force sensing. Generally, we humans use both of these sensing modalities at once to move around, with exteroception helping us plan ahead and proprioception kicking in when things get tricky. You use proprioception in the dark, for example, where movement is still totally possible; you just do it slowly and carefully, relying on balance and feeling your way around.

For legged robots, exteroception is what enables them to do all the cool stuff—with really good external sensing and the time (and compute) to do some awesome motion planning, robots can move dynamically and fast. Legged robots are much less comfortable in the dark, however, or really under any circumstances where the exteroception they need either doesn’t come through (because a sensor is not functional for whatever reason) or just totally sucks because of robot-unfriendly things like reflective surfaces or thick undergrowth or whatever. This is a problem because the real world is frustratingly full of robot-unfriendly things.

The research that the Robotic Systems Lab at ETH Zürich has published in Science Robotics showcases a control system that allows a legged robot to evaluate how reliable the exteroceptive information it’s getting is. When the data are good, the robot plans ahead and moves quickly. But when the data seem incomplete, noisy, or misleading, the controller gracefully degrades to proprioceptive locomotion instead. This means that the robot keeps moving—maybe more slowly and carefully, but it keeps moving—and eventually, it’ll get to the point where it can rely on exteroceptive sensing again. It’s a technique that humans and animals use, and now robots can use it too, combining speed and efficiency with safety and reliability to handle almost any kind of challenging terrain.
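
To make the idea concrete, here is a minimal sketch of that kind of confidence gating, written in Python. This is not the authors’ code; every name and number is invented for illustration, and in the actual system the gating is learned end-to-end inside the controller’s belief state rather than hand-coded.

import numpy as np

def build_policy_observation(height_samples, confidence, proprio):
    """Hypothetical illustration of confidence-gated sensor fusion.

    height_samples: terrain heights sampled from the elevation map
    confidence:     per-sample reliability in [0, 1] (1 = trust the map)
    proprio:        dict of proprioceptive state (joint positions,
                    joint velocities, estimated ground height under the feet)
    """
    # Fallback terrain: assume flat ground at the height the feet report.
    fallback = np.full_like(height_samples, proprio["est_ground_height"])

    # Trust the map only as far as the confidence allows; otherwise
    # degrade gracefully toward the proprioceptive flat-ground guess.
    gated_terrain = confidence * height_samples + (1.0 - confidence) * fallback

    # The locomotion policy sees proprioception plus the gated terrain samples.
    return np.concatenate([proprio["joint_pos"], proprio["joint_vel"], gated_terrain])

When confidence is near 1 the policy behaves like a fully perceptive controller; as confidence drops toward 0 it is effectively walking blind on its proprioception, which is the graceful degradation described above.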

We got a compelling preview of this technique during the DARPA SubT Final Event last fall, when it was being used by Team CERBERUS’ ANYmal legged robots to help them achieve victory. I’m honestly not sure whether the SubT final course was more or less challenging than some mountain climbing in Switzerland, but the performance in the video below is quite impressive, especially since ANYmal managed to complete the uphill portion of the hike four minutes faster than the suggested time for an average human.

Learning robust perceptive locomotion for quadrupedal robots in the wild

Those clips of ANYmal walking through dense vegetation and deep snow do a great job of illustrating how well the system functions. While the exteroceptive data is showing obstacles all over the place and wildly inaccurate ground height, the robot knows where its feet are, and relies on that proprioceptive data to keep walking forward safely and without falling. Here are some other examples showing common problems with sensor data that ANYmal is able to power through:

Other legged robots do use proprioception for reliable locomotion, but what’s unique here is this seamless combination of speed and robustness, with the controller moving between exteroception and proprioception based on how confident it is about what it’s seeing. And ANYmal’s performance on this hike, as well as during the SubT Final, is ample evidence of how well this approach works.

For more details, we spoke with Takahiro Miki, a PhD student in the Robotic Systems Lab at ETH Zürich and first author on the paper.

IEEE Spectrum: The paper’s intro says “until now, legged robots could not match the performance of animals in traversing challenging real-world terrain.” Suggesting that legged robots can now “match the performance of animals” seems very optimistic. What makes you comfortable with that statement?

Takahiro Miki: Achieving a level of mobility similar to animals is probably the goal for many of us researchers in this area. However, robots are still far behind nature and this paper is only a tiny step in this direction.

Your controller enables robust traversal of “harsh natural terrain.” What does “harsh” mean, and can you describe the kind of terrain that would be in the next level of difficulty beyond “harsh”?

Miki: We aim to send robots to places that are too dangerous or difficult for humans to reach. In this work, by “harsh” we mean places that are hard not only for robots but also for us. For example, steep hiking trails or snow-covered trails that are tricky to traverse. With our approach, the robot traversed steep and wet rocky surfaces, dense vegetation, and rough terrain in underground tunnels and natural caves with loose gravel, at human walking speed.

We think the next level would be somewhere that requires precise motion with careful planning, such as stepping stones, or obstacles that require more dynamic motion, such as jumping over a gap.

How much do you think having a human choose the path during the hike helped the robot be successful?

Miki: The intuition of the human operator choosing a feasible path for the robot certainly helped the robot’s success. Even though the robot is robust, it cannot walk over obstacles which are physically impossible, e.g., obstacles bigger than the robot or cliffs. In other scenarios such as during the DARPA SubT Challenge however, a high-level exploration and path planning algorithm guides the robot. This planner is aware of the capabilities of the locomotion controller and uses geometric cues to guide the robot safely. Achieving this for an autonomous hike in a mountainous environment, where a more semantic environment understanding is necessary, is our future work.

What impressed you the most in terms of what the robot was able to handle?

Miki: The snow stairs were the very first experiment we conducted outdoors with the current controller, and I was surprised that the robot could handle the slippery snowy stairs. Also during the hike, the terrain was quite steep and challenging. When I first checked the terrain, I thought it might be too difficult for the robot, but it could just handle all of them. The open stairs were also challenging due to the difficulty of mapping. Because the lidar scan passes through the steps, the robot couldn’t see the stairs properly. But the robot was robust enough to traverse them.

At what point does the robot fall back to proprioceptive locomotion? How does it know if the data its sensors are getting are false or misleading? And how much does proprioceptive locomotion impact performance or capabilities?

Miki: We think the robot detects whether the exteroception matches the proprioception through its foot contacts or foot positions. If the map is correct, the feet make contact where the map suggests. Then the controller recognizes that the exteroception is correct and makes use of it. Once it finds that the foot contact doesn’t match the ground on the map, or the feet go below the map, it recognizes that exteroception is unreliable and relies more on proprioception. We showed this in this supplementary video experiment:

Supplementary Robustness Evaluation

However, since we trained the neural network in an end-to-end manner, where the student policy just tries to follow the teacher’s action by trying to capture the necessary information in its belief state, we can only guess how it knows. In the initial approach, we were just directly inputting exteroception into the control policy. In this setup, the robot could walk over obstacles and stairs in the lab environment, but once we went outside, it failed due to mapping failures. Therefore, combining with proprioception was critical to achieve robustness.
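
As a thought experiment only, the consistency check Miki describes could look something like the sketch below. The function name, parameters, and thresholds are invented for illustration; as he notes, the real controller learns this behavior implicitly in its belief state rather than applying an explicit rule.

def exteroception_confidence(foot_contact_z, map_height_at_foot, tol=0.05, falloff=0.20):
    """Hypothetical, hand-coded version of the check described above:
    compare where a foot actually made contact with where the elevation
    map said the ground was. A large mismatch (or a foot sinking below
    the map, as in deep snow or soft vegetation) means the map is suspect."""
    error = abs(foot_contact_z - map_height_at_foot)
    if error <= tol:
        return 1.0  # map and contact agree: exteroception can be trusted
    # Confidence decays linearly as the mismatch grows past the tolerance.
    return max(0.0, 1.0 - (error - tol) / falloff)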

How much are you constrained by the physical performance of the robot itself? If the robot were stronger or faster, would you be able to take advantage of that?

Miki: When we use reinforcement learning, the policy usually tries to use as much torque and speed as it is allowed to. Therefore, if the robot were stronger or faster, we think we could increase robustness further and overcome more challenging obstacles at higher speeds.

What remains challenging, and what are you working on next?

Miki: Currently, we steer the robot manually for most of the experiments (except the DARPA SubT Challenge). Adding more levels of autonomy is the next goal. As mentioned above, we want the robot to complete a difficult hike without human operators. Furthermore, there is a lot of room for improvement in the locomotion capability of the robot. For “harsher” terrains, we want the robot to perceive the world in 3D and manifest richer behaviors, such as jumping over stepping stones or crawling under overhanging obstacles, which is not possible with the current 2.5D elevation map.

