If you run a business on the web, then you know how important it is to increase engagement. But maintaining your social channels can easily eat up several hours every month, which is why so many people have turned to Post Planner. And since a lifetime subscription to their Starter Plan is on sale right now for 86 percent off — just $79.99 — there’s never been a better time to get it.
Post Planner is a must-have tool for anyone who manages social media channels. It automates the entire job, so you’ll spend less time finding content and scheduling posts and more time engaging with your audience. And it works. Most businesses, in fact, report a 510 percent increase in engagement once they put Post Planner in place.
Post Planner works in any web browser and is compatible with most of the top social media platforms. It’s easy to use, you can plan a posting schedule that makes the most sense to you, and you’ll never run out of content. It’s also rated 4.7 out of 5 stars by users on Trustpilot, so you can purchase knowing that it will likely work for you too.
Post Planner Starter Plan: Lifetime Subscription – $79.99
Prices subject to change.
Original Article: pcworld.com
Video Friday: an Agile Year
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ICRA 2022: 23–27 May 2022, Philadelphia
ERF 2022: 28–30 June 2022, Rotterdam, Germany
CLAWAR 2022: 12–14 September 2022, Açores, Portugal
Let us know if you have suggestions for next week, and enjoy today’s videos.
Agility had a busy 2021. This is a long video, but there’s new stuff in it (or, new to me, anyway), including impressive manipulation skills, robust perceptive locomotion, jumping, and some fun costumes.
[ Agility Robotics ]
Houston Mechatronics is now Nauticus Robotics, and they have a fancy new video to prove it.
[ Nauticus ]
Club_KUKA is an unprecedented KUKA show cell that combines entertainment and robotics with technical precision and artistic flair. The show cell is home to a cool group called the Kjays: a KR3 AGILUS on drums loops its beats and sets the tempo, the KR CYBERTECH nano is the nimble DJ with rhythm in its blood, a KR AGILUS performs as a light artist with soft and expansive movements, and an LBR iiwa, mounted on the ceiling, keeps an eye on the unusual robot party.
And if that was too much for you to handle (?), here’s “chill mode:”
[ Kuka ]
The most amazing venue for the 2022 Winter Olympics is the canteen.
[ SCMP ]
A mini documentary thing on ANYbotics from Kaspersky, the highlight of which is probably a young girl meeting ANYmal on the street and asking the important questions, like whether it comes in any other colors.
[ ANYbotics ]
If you’re looking for a robot that can carry out maintenance tasks, our teleoperation systems can give you just that. Think of it as remote hands, able to perform tasks, without you having to be there on-location. You’re still in full control, as the robot hands will replicate your hand movements. You can control the robot from anywhere you like, even from home, which is a much safer and environmentally-friendly approach.
[ Shadow Robot ]
If I had fingers like this, I’d be pretty awesome at manipulating cubes too.
[ Yale ]
The open-source, artificially intelligent prosthetic leg designed by researchers at the University of Michigan will be brought to the research market by Humotech, a Pittsburgh-based assistive technology company. The goal of the collaboration is to speed the development of control software for robotic prosthetic legs, which have the potential to provide the power and natural gait of a human leg to prosthetic users.
This video is worth watching entirely for the shoulder-dislocating high-five.
[ Paper ]
Of everything in this SoftBank Robotics 2021 rewind, my favorite highlight is the giant rubber duck avoidance.
[ SoftBank ]
On this episode of the Robot Brains Podcast, Pieter talks with David Rolnick about how machine learning can be applied to climate change.
[ Robot Brains ]
A talk from Stanford’s Mark Cutkosky on “Selectively Soft Robotics: Integrating Smart Materials in Soft Robotics.”
[ BDML ]
This is a very long video from Yaskawa which goes over many (if not most or all) of the ways that its 500,000 industrial arms are currently being used. It’s well labeled, so I recommend just skipping around to the interesting parts, like cow milking.
[ Yaskawa ]
Legged Robots Learn to Hike Harsh Terrain
Robots, like humans, generally use two different sensory modalities when interacting with the world. There’s exteroceptive perception (or exteroception), which comes from external sensing systems like lidar, cameras, and eyeballs. And then there’s proprioceptive perception (or proprioception), which is internal sensing, involving things like touch, and force sensing. Generally, we humans use both of these sensing modalities at once to move around, with exteroception helping us plan ahead and proprioception kicking in when things get tricky. You use proprioception in the dark, for example, where movement is still totally possible, you just do it slowly and carefully, relying on balance and feeling your way around.
For legged robots, exteroception is what enables them to do all the cool stuff—with really good external sensing and the time (and compute) to do some awesome motion planning, robots can move dynamically and fast. Legged robots are much less comfortable in the dark, however, or really under any circumstances where the exteroception they need either doesn’t come through (because a sensor is not functional for whatever reason) or just totally sucks because of robot-unfriendly things like reflective surfaces or thick undergrowth or whatever. This is a problem because the real world is frustratingly full of robot-unfriendly things.
The research that the Robotic Systems Lab at ETH Zürich has published in Science Robotics showcases a control system that allows a legged robot to evaluate how reliable the exteroceptive information it’s getting is. When the data are good, the robot plans ahead and moves quickly. But when the data seem incomplete, noisy, or misleading, the controller gracefully degrades to proprioceptive locomotion instead. This means that the robot keeps moving—maybe more slowly and carefully, but it keeps moving—and eventually, it’ll get to the point where it can rely on exteroceptive sensing again. It’s a technique that humans and animals use, and now robots can use it too, combining speed and efficiency with safety and reliability to handle almost any kind of challenging terrain.
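The graceful-degradation idea can be sketched in a few lines: blend the exteroceptive and proprioceptive terrain estimates according to a confidence score, and slow the robot down as that confidence drops. This is a minimal illustrative sketch, not the paper’s actual controller (which is a learned neural-network policy); all function names and thresholds here are assumptions for illustration.

```python
# Illustrative sketch of confidence-weighted sensor fusion for locomotion.
# When confidence in exteroception is high, the fused estimate follows the
# external sensors (lidar/camera height map); when it is low, the estimate
# falls back to proprioception (where the feet actually are).

def fuse_height_estimate(extero_height: float,
                         proprio_height: float,
                         confidence: float) -> float:
    """Blend two terrain-height estimates; confidence is in [0, 1]."""
    return confidence * extero_height + (1.0 - confidence) * proprio_height

def target_speed(confidence: float,
                 max_speed: float = 1.0,
                 min_speed: float = 0.3) -> float:
    """Move fast when exteroception is trusted; slow and careful otherwise."""
    return min_speed + confidence * (max_speed - min_speed)
```

With full confidence the robot plans against the map at full speed; with zero confidence it ignores the map entirely and feels its way forward at the minimum speed, which mirrors how you might walk through a dark room.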
We got a compelling preview of this technique during the DARPA SubT Final Event last fall, when it was being used by Team CERBERUS’ ANYmal legged robots to help them achieve victory. I’m honestly not sure whether the SubT final course was more or less challenging than some mountain climbing in Switzerland, but the performance in the video below is quite impressive, especially since ANYmal managed to complete the uphill portion of the hike four minutes faster than the suggested time for an average human.
Learning robust perceptive locomotion for quadrupedal robots in the wild
Those clips of ANYmal walking through dense vegetation and deep snow do a great job of illustrating how well the system functions. While the exteroceptive data is showing obstacles all over the place and wildly inaccurate ground height, the robot knows where its feet are, and relies on that proprioceptive data to keep walking forward safely and without falling. Here are some other examples showing common problems with sensor data that ANYmal is able to power through:
Other legged robots do use proprioception for reliable locomotion, but what’s unique here is this seamless combination of speed and robustness, with the controller moving between exteroception and proprioception based on how confident it is about what it’s seeing. And ANYmal’s performance on this hike, as well as during the SubT Final, is ample evidence of how well this approach works.
For more details, we spoke with Takahiro Miki, a PhD student in the Robotic Systems Lab at ETH Zürich and first author on the paper.
IEEE Spectrum: The paper’s intro says “until now, legged robots could not match the performance of animals in traversing challenging real-world terrain.” Suggesting that legged robots can now “match the performance of animals” seems very optimistic. What makes you comfortable with that statement?
Takahiro Miki: Achieving a level of mobility similar to animals is probably the goal for many of us researchers in this area. However, robots are still far behind nature and this paper is only a tiny step in this direction.
Your controller enables robust traversal of “harsh natural terrain.” What does “harsh” mean, and can you describe the kind of terrain that would be in the next level of difficulty beyond “harsh”?
Miki: We aim to send robots to places that are too dangerous or difficult for humans to reach. In this work, by “harsh,” we mean places that are hard not only for robots but also for us—for example, steep hiking trails or snow-covered trails that are tricky to traverse. With our approach, the robot traversed steep and wet rocky surfaces, dense vegetation, and rough terrain in underground tunnels and natural caves with loose gravel, all at human walking speed.
We think the next level would be somewhere which requires precise motion with careful planning such as stepping stones, or some obstacles that require more dynamic motion, such as jumping over a gap.
How much do you think having a human choose the path during the hike helped the robot be successful?
Miki: The intuition of the human operator choosing a feasible path for the robot certainly helped the robot’s success. Even though the robot is robust, it cannot walk over obstacles which are physically impossible, e.g., obstacles bigger than the robot or cliffs. In other scenarios such as during the DARPA SubT Challenge however, a high-level exploration and path planning algorithm guides the robot. This planner is aware of the capabilities of the locomotion controller and uses geometric cues to guide the robot safely. Achieving this for an autonomous hike in a mountainous environment, where a more semantic environment understanding is necessary, is our future work.
What impressed you the most in terms of what the robot was able to handle?
Miki: The snow stairs were the very first experiment we conducted outdoors with the current controller, and I was surprised that the robot could handle the slippery snowy stairs. Also during the hike, the terrain was quite steep and challenging. When I first checked the terrain, I thought it might be too difficult for the robot, but it could just handle all of them. The open stairs were also challenging due to the difficulty of mapping. Because the lidar scan passes through the steps, the robot couldn’t see the stairs properly. But the robot was robust enough to traverse them.
At what point does the robot fall back to proprioceptive locomotion? How does it know if the data its sensors are getting are false or misleading? And how much does proprioceptive locomotion impact performance or capabilities?
Miki: We think the robot detects whether exteroception matches proprioception through its foot contacts and foot positions. If the map is correct, the feet make contact where the map suggests they will. The controller then recognizes that the exteroception is correct and makes use of it. Once it experiences foot contacts that don’t match the ground on the map, or feet that go below the map, it recognizes that exteroception is unreliable and relies more on proprioception. We showed this in this supplementary video experiment:
Supplementary Robustness Evaluation
However, since we trained the neural network in an end-to-end manner—the student policy simply tries to follow the teacher’s actions by capturing the necessary information in its belief state—we can only guess how it knows. In our initial approach, we directly fed exteroception into the control policy. In that setup, the robot could walk over obstacles and stairs in the lab environment, but once we went outside, it failed due to mapping failures. Combining exteroception with proprioception was therefore critical to achieving robustness.
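The consistency check Miki describes—trusting the map more when the feet land where it predicts, and less when they punch through it—can be sketched as a simple running reliability estimate. This is a hypothetical illustration of the idea, not the paper’s learned belief-state encoder; the tolerance and update rate are made-up values.

```python
# Illustrative sketch: update a reliability score for exteroception by
# comparing where the elevation map said the ground was against where the
# foot actually made contact. A match nudges reliability toward 1; a
# mismatch (e.g. the foot sinking through snow or vegetation that the map
# read as solid ground) nudges it toward 0.

def update_reliability(reliability: float,
                       map_height: float,
                       foot_contact_height: float,
                       tolerance: float = 0.05,
                       rate: float = 0.2) -> float:
    """Exponentially nudge reliability toward 1 on a match, 0 on a mismatch.

    Heights are in meters; reliability stays in [0, 1].
    """
    match = abs(foot_contact_height - map_height) <= tolerance
    target = 1.0 if match else 0.0
    return reliability + rate * (target - reliability)
```

Repeated mismatches drive the score down quickly, shifting the controller toward proprioceptive locomotion, while a string of correct predictions gradually restores trust in the map.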
How much are you constrained by the physical performance of the robot itself? If the robot were stronger or faster, would you be able to take advantage of that?
Miki: When we use reinforcement learning, the policy usually tries to use as much torque and speed as it is allowed to use. Therefore if the robot was stronger or faster, we think we could increase robustness further and overcome more challenging obstacles with faster speed.
What remains challenging, and what are you working on next?
Miki: So far, we have steered the robot manually for most of the experiments (except the DARPA SubT Challenge). Adding more levels of autonomy is the next goal. As mentioned above, we want the robot to complete a difficult hike without human operators. Furthermore, there is much room for improvement in the robot’s locomotion capability. For “harsher” terrain, we want the robot to perceive the world in 3D and exhibit richer behaviors, such as jumping across stepping stones or crawling under overhanging obstacles, which is not possible with the current 2.5D elevation map.