<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Map-Based Localization for Autonomous Driving Workshop | Kudan global</title>
	<atom:link href="https://www.kudan.io/blog/tag/map-based-localization-for-autonomous-driving-workshop/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.kudan.io</link>
	<description>Kudan has been providing proprietary Artificial Perception technologies based on SLAM to enable use cases with significant market potential and impact on our lives such as autonomous driving, robotics, AR/VR and smart cities</description>
	<lastBuildDate>Tue, 27 Feb 2024 02:20:09 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.8.13</generator>

<image>
	<url>https://i0.wp.com/www.kudan.io/wp-content/uploads/2020/05/cropped-NoImage.png?fit=32%2C32&#038;ssl=1</url>
	<title>Map-Based Localization for Autonomous Driving Workshop | Kudan global</title>
	<link>https://www.kudan.io</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">179852210</site>	<item>
		<title>Understanding Covariance Quality in Robot Localisation</title>
		<link>https://www.kudan.io/blog/understanding-covariance-quality-in-robot-localisation/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=understanding-covariance-quality-in-robot-localisation</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Tue, 27 Feb 2024 02:20:09 +0000</pubDate>
				<category><![CDATA[Tech Blog]]></category>
		<category><![CDATA[Autonomous Driving]]></category>
		<category><![CDATA[autonomous mobile industrial robots]]></category>
		<category><![CDATA[Autonomous Mobile Robot]]></category>
		<category><![CDATA[Autonomous Mobile Robots]]></category>
		<category><![CDATA[autonomous mobility]]></category>
		<category><![CDATA[autonomous vehicles]]></category>
		<category><![CDATA[localization]]></category>
		<category><![CDATA[map-based localization]]></category>
		<category><![CDATA[Map-Based Localization for Autonomous Driving Workshop]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1765</guid>

					<description><![CDATA[<p>(Written by Anthony Glynn, Kudan CTO) Consider a robot navigating the bustling aisles of a warehouse, swiftly picking up and delivering items. It must decide how quickly to move and how much space to leave when turning corners to avoid accidents, like clipping a shelf and causing a cascade of items. For this, the robot [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/understanding-covariance-quality-in-robot-localisation/">Understanding Covariance Quality in Robot Localisation</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>(Written by <a href="https://www.linkedin.com/in/anthony-glynn-952b6653/">Anthony Glynn</a>, Kudan CTO)</p>
<p><img loading="lazy" class="size-large wp-image-1775 aligncenter" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03-1024x455.png?resize=1024%2C455&#038;ssl=1" alt="" width="1024" height="455" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=1024%2C455&amp;ssl=1 1024w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=300%2C133&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=768%2C342&amp;ssl=1 768w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=1536%2C683&amp;ssl=1 1536w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?w=1808&amp;ssl=1 1808w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></p>
<p>Consider a robot navigating the bustling aisles of a warehouse, swiftly picking up and delivering items. It must decide how quickly to move and how much space to leave when turning corners to avoid accidents, like clipping a shelf and causing a cascade of items. For this, the robot relies on its localisation module which integrates data from its sensors, such as cameras, lidars and wheel odometry, and combines this with a prebuilt map of the environment to pinpoint its precise location. The localisation system must not only output its position but also assess how confident it is in its estimate. This confidence, quantified by something called covariance, is crucial. Accurate location data is essential, but so is the robot&#8217;s certainty about this data. If the robot misjudges its certainty, being either too confident or too cautious, it could lead to reckless behaviour or to an overly hesitant and inefficient system.</p>
<h4 id="Covariance"><strong>Covariance</strong></h4>
<p data-renderer-start-pos="949">Rather than relying on a single, precise location estimate, our localisation system instead outputs an entire probability distribution. Covariance, which comes from modelling our estimate as a Gaussian distribution, extends the concept of variance to multiple dimensions. It is represented as a matrix and captures both the notion of how spread out our estimates are, as well as the correlation between the different aspects of the robot’s pose such as the x and y coordinates. A larger covariance indicates a wider spread, signalling greater uncertainty: the robot’s true location could fall within a broader range of values.</p>
<p data-renderer-start-pos="949"><img loading="lazy" class="aligncenter wp-image-1766 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?resize=389%2C389&#038;ssl=1" alt="" width="389" height="389" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?w=389&amp;ssl=1 389w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?resize=150%2C150&amp;ssl=1 150w" sizes="(max-width: 389px) 100vw, 389px" data-recalc-dims="1" /></p>
<p data-renderer-start-pos="949">(Image: two Gaussian distributions each represented by 500 samples and an ellipse depicting the 90% confidence region. The blue distribution has a much smaller covariance than the red distribution, indicating a more certain position estimate.)</p>
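<p>As an illustrative sketch (not production code), a distribution like the ones pictured above can be reproduced with a few lines of numpy; the eigenvalues of the covariance matrix are the variances along the principal axes of the uncertainty ellipse:</p>

```python
import numpy as np

# Hypothetical 2D position covariances (x, y), in metres^2: the diagonal
# entries are variances, the off-diagonal entry is the x-y correlation term.
cov_confident = np.array([[0.01, 0.00],
                          [0.00, 0.01]])   # tight: ~10 cm std dev ("blue")
cov_uncertain = np.array([[0.25, 0.10],
                          [0.10, 0.25]])   # wide: ~50 cm std dev, correlated ("red")

rng = np.random.default_rng(0)
samples = rng.multivariate_normal([0.0, 0.0], cov_uncertain, size=500)

# The eigenvalues of the covariance are the variances along the
# uncertainty ellipse's principal axes.
eigvals, eigvecs = np.linalg.eigh(cov_uncertain)
print("std devs along principal axes:", np.sqrt(eigvals))
```

<p>Plotting the 500 samples for each matrix would recover the two point clouds shown in the image.</p>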
<p data-renderer-start-pos="1821">Effective decision making relies heavily on covariance. The system needs to determine if its confidence in its location estimate is sufficient to proceed with its current task, or if it must take corrective action and attempt to reduce its position uncertainty. Path planners can take pose covariance as input, and this allows them to adjust movement speed as well as path safety margins.</p>
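<p>To make the planner connection concrete, here is a toy rule of our own (an illustration, not a real planner interface) that derives a speed limit and an obstacle clearance from the positional covariance:</p>

```python
import numpy as np

def speed_and_margin(cov, max_speed=2.0, base_margin=0.30):
    """Derive a speed limit (m/s) and obstacle clearance (m) from a
    2x2 positional covariance (m^2). Illustrative heuristic only."""
    # 1-sigma uncertainty along the worst direction, in metres.
    sigma = float(np.sqrt(np.linalg.eigvalsh(cov)[-1]))
    speed = max_speed / (1.0 + 5.0 * sigma)   # slow down when uncertain
    margin = base_margin + 3.0 * sigma        # widen clearance (~3 sigma)
    return speed, margin

confident = np.diag([0.0025, 0.0025])   # 5 cm std dev
uncertain = np.diag([0.25, 0.25])       # 50 cm std dev
# A confident pose permits faster motion and tighter margins;
# an uncertain pose forces slower, wider-berth behaviour.
```

<p>The specific constants are arbitrary; the point is that the covariance, not just the pose, drives the behaviour.</p>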
<p data-renderer-start-pos="2211">Covariance also plays a vital role when integrating measurements from different sensors or combining pose estimates output from various internal modules, offering a systematic way to appropriately weight this information. Higher confidence data will be given more weight. This ensures that the most reliable information has the greatest influence on the system’s overall pose estimate.</p>
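<p>This weighting can be sketched with the standard inverse-covariance (information-form) fusion of two independent Gaussian estimates; the sensor names and numbers below are purely illustrative:</p>

```python
import numpy as np

def fuse_estimates(x1, cov1, x2, cov2):
    """Fuse two independent Gaussian estimates by inverse-covariance
    weighting: the more confident estimate gets the greater weight."""
    info1, info2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    fused_cov = np.linalg.inv(info1 + info2)
    fused_x = fused_cov @ (info1 @ x1 + info2 @ x2)
    return fused_x, fused_cov

# Hypothetical example: a confident lidar estimate and a less
# certain odometry estimate of 2D position (metres).
lidar_x = np.array([1.00, 2.00]); lidar_cov = np.diag([0.01, 0.01])
odom_x  = np.array([1.40, 2.40]); odom_cov  = np.diag([0.09, 0.09])

fused_x, fused_cov = fuse_estimates(lidar_x, lidar_cov, odom_x, odom_cov)
# The fused estimate lands much closer to the lidar estimate, and the
# fused covariance is smaller than either input's.
```

<p>Note that fusing two estimates always shrinks the covariance, which is exactly why an overconfident input is dangerous: its excessive weight propagates into the fused result.</p>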
<p data-renderer-start-pos="2598">It is important that the output covariance accurately reflects the true level of uncertainty. An overconfident system could be dangerous, while an underconfident one is likely to be inefficient.</p>
<h4 id="Overconfidence" data-renderer-start-pos="2598"><strong>Overconfidence</strong></h4>
<p data-renderer-start-pos="2598">The system is overconfident if it assumes its location and map are more accurate than they actually are. The output pose covariance will be smaller than it ought to be, meaning the system is underestimating the probability that its actual location is further away from where it thinks it is.</p>
<p data-renderer-start-pos="3123">Overconfidence also leads the system to undervalue new information. If it believes its current pose estimate too strongly, it may discount new, and especially conflicting, data. As a consequence it might resist adapting to new situations, and could even disregard corrective information, ultimately preventing it from reducing its error.</p>
<p data-renderer-start-pos="3465">An overconfident system might also cause the robot to exhibit risky behaviours, such as travelling too quickly or not leaving enough obstacle clearance. This could result in dangerous situations, such as collisions or the robot getting stuck.</p>
<h4 id="Underconfidence" data-renderer-start-pos="3711"><strong>Underconfidence</strong></h4>
<p data-renderer-start-pos="3728">Conversely, an underconfident system will be excessively cautious regarding the quality of its pose estimate, resulting in an excessively large covariance. This means it is exaggerating the likelihood that its true location is significantly different from its estimated position.</p>
<p data-renderer-start-pos="4009">This would likely result in reduced efficiency or increased running times due to overly cautious behaviour. For example, the robot might move at an unnecessarily slow pace, or repeatedly decide it requires additional data and processing time to confirm information it already has.</p>
<h4 id="Understanding-covariance-quality" data-renderer-start-pos="4317"><strong>Understanding covariance quality</strong></h4>
<p data-renderer-start-pos="4351">It is therefore imperative that we are able to analyse and understand the quality of the covariance estimates that the system, or any of its internal modules, produces. A good covariance should accurately model the underlying probability: the “true” pose should be contained inside the estimated covariance’s 90% confidence region 90% of the time. It is realistic to expect some degree of degradation in covariance quality because the system is nonlinear: the true probability distribution, in general, cannot be perfectly modelled as a Gaussian, so the Gaussian representation will necessarily be an approximation.</p>
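<p>The 90%-coverage criterion can be checked empirically. The sketch below (an illustration of the idea, not Kudan's internal tooling) tests, per frame, whether the ground-truth pose lies within the estimate's 90% confidence region, using the squared Mahalanobis distance against the chi-squared threshold for 2 degrees of freedom:</p>

```python
import numpy as np

CHI2_90_2DOF = 4.605  # 90% quantile of chi-squared with 2 dof (x, y)

def coverage_rate(estimates, covariances, ground_truth):
    """Fraction of frames where the ground-truth position falls inside
    the estimate's 90% confidence region."""
    inside = 0
    for est, cov, gt in zip(estimates, covariances, ground_truth):
        err = gt - est
        d2 = err @ np.linalg.inv(cov) @ err  # squared Mahalanobis distance
        inside += d2 <= CHI2_90_2DOF
    return inside / len(estimates)

# Simulate a well-calibrated system: errors actually drawn from the
# reported covariance, so the empirical rate should be close to 0.90.
rng = np.random.default_rng(42)
cov = np.array([[0.04, 0.01], [0.01, 0.04]])
n = 2000
ests = np.zeros((n, 2))
gts = rng.multivariate_normal([0.0, 0.0], cov, size=n)
rate = coverage_rate(ests, [cov] * n, gts)
```

<p>A rate well above 90% would suggest underconfidence; well below, overconfidence.</p>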
<p data-renderer-start-pos="4984">To perform this analysis we look at the system’s performance over a large variety of datasets, and compare it to ground-truth. Internally at Kudan we are continuing to explore better ways of measuring and visualising covariance quality, as well as trying to understand which variables have the most significant impact on covariance quality.</p>
<p data-renderer-start-pos="5326">Once a system’s covariance quality is understood, the next step is to use this information to calibrate the uncertainty estimation: adjusting the estimated covariance in order to better represent the true uncertainty.</p>
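<p>One simple calibration approach (an illustrative assumption on our part, not necessarily the method used in practice) is to scale the covariance by the ratio between the observed 90th percentile of squared Mahalanobis distances and the theoretical chi-squared threshold:</p>

```python
import numpy as np

CHI2_90_2DOF = 4.605  # 90% quantile of chi-squared with 2 dof

def calibration_scale(sq_mahalanobis_dists):
    """If the empirical 90th percentile of squared Mahalanobis distances
    exceeds the chi-squared threshold, the system is overconfident;
    multiplying every covariance by this ratio restores 90% coverage."""
    empirical_q90 = np.quantile(sq_mahalanobis_dists, 0.90)
    return empirical_q90 / CHI2_90_2DOF

# Example: an overconfident system whose true errors are twice as
# spread out as its reported covariance implies.
rng = np.random.default_rng(7)
reported_cov = np.eye(2)
true_cov = 2.0 * reported_cov
errs = rng.multivariate_normal([0.0, 0.0], true_cov, size=5000)
d2 = np.einsum('ni,ij,nj->n', errs, np.linalg.inv(reported_cov), errs)
scale = calibration_scale(d2)  # recovers a factor near 2.0
```

<p>A single global scale factor is the crudest possible correction; in practice the appropriate adjustment may depend on the environment, the sensors in use, and the motion being performed.</p>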
<h4 id="Closing-thoughts" data-renderer-start-pos="5326"><strong>Closing thoughts</strong></h4>
<p>The management of uncertainty through covariance is fundamental to the operational success of mobile robots, ensuring both safety and efficiency in dynamic environments such as warehouses. By refining our understanding and calibration of covariance estimates, we continue pushing closer to finding the right balance between avoiding the pitfalls of dangerous overconfidence, and the inefficiencies of undue caution.</p>
<p>&nbsp;</p>
<p><a href="https://www.kudan.io/contact/"><strong>Please contact us for further technical information</strong></a></p>
<p>&nbsp;</p><p>The post <a href="https://www.kudan.io/blog/understanding-covariance-quality-in-robot-localisation/">Understanding Covariance Quality in Robot Localisation</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1765</post-id>	</item>
		<item>
		<title>Kudan and Artisense to sponsor the upcoming workshop in October 2022: “Map-Based Localization for Autonomous Driving Workshop (ECCV 2022)”</title>
		<link>https://www.kudan.io/blog/kudan-and-artisense-to-sponsor-the-upcoming-workshop-in-october-2022/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=kudan-and-artisense-to-sponsor-the-upcoming-workshop-in-october-2022</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Wed, 31 Aug 2022 09:30:53 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[artisense]]></category>
		<category><![CDATA[ECCV 2022]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[Map-Based Localization for Autonomous Driving Workshop]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<category><![CDATA[sponsor]]></category>
		<category><![CDATA[Technical University of Munich]]></category>
		<category><![CDATA[TUM]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1394</guid>

					<description><![CDATA[<p>Kudan Inc. (headquartered in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”), a leading provider of Artificial Perception / SLAM technology across a variety of applications, is pleased to announce that Kudan and Artisense GmbH (Kudan’s group company, hereafter “Artisense”) sponsor the workshop on “Map-Based Localization for Autonomous Driving Workshop (ECCV 2022)” in collaboration with the [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/kudan-and-artisense-to-sponsor-the-upcoming-workshop-in-october-2022/">Kudan and Artisense to sponsor the upcoming workshop in October 2022: “Map-Based Localization for Autonomous Driving Workshop (ECCV 2022)”</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Kudan Inc. (headquartered in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”), a leading provider of Artificial Perception / SLAM technology across a variety of applications, is pleased to announce that Kudan and Artisense GmbH (Kudan’s group company, hereafter “Artisense”) will sponsor the “Map-Based Localization for Autonomous Driving Workshop (ECCV 2022)”, held in collaboration with the Technical University of Munich (<a href="https://www.tum.de/" target="_blank" rel="noopener">TUM</a>). The workshop will take place during ECCV 2022, 23-27 October 2022.</p>
<p><img loading="lazy" class="aligncenter wp-image-1395 size-large" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/08/ECCV-Workshop-Kudan-1024x576.png?resize=1024%2C576&#038;ssl=1" alt="" width="1024" height="576" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/08/ECCV-Workshop-Kudan.png?resize=1024%2C576&amp;ssl=1 1024w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/08/ECCV-Workshop-Kudan.png?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/08/ECCV-Workshop-Kudan.png?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/08/ECCV-Workshop-Kudan.png?resize=1536%2C864&amp;ssl=1 1536w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/08/ECCV-Workshop-Kudan.png?w=1920&amp;ssl=1 1920w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></p>
<p>The workshop consists of multiple sessions on localization and mapping topics in the field of autonomous driving, and features notable speakers such as:</p>
<ul>
<li>Abhinav Valada (The University of Freiburg)</li>
<li>Andrew Davison (Imperial College London)</li>
<li>Henning Lategahn (atlatec GmbH)</li>
<li>Philipp Krähenbühl (The University of Texas at Austin)</li>
<li>Yuning Chai (Cruise)</li>
</ul>
<p>(From the Artisense <a href="https://www.artisense.ai/events/mlad-eccv-2022" target="_blank" rel="noopener">website</a>)</p>
<p>Additionally, we are pleased to host the 3rd edition of the re-localization challenge for autonomous driving, based on the 4Seasons dataset. The 4Seasons dataset contains sensor data recorded at multiple locations across all four seasons, allowing re-localization robustness to be tested against seasonal changes in scenery. The challenge is open for public participation.</p>
<p>For more details, visit <a href="https://sites.google.com/view/mlad-eccv2022" target="_blank" rel="noopener">https://sites.google.com/view/mlad-eccv2022</a>.</p>
<p><strong>About Artisense GmbH</strong><br />
Artisense is a computer vision and sensor fusion software company that develops an integrated localization and mapping platform using cameras as a lead sensor for the automation of robots, vehicles and spatial intelligence applications. Artisense was founded in 2016 as a spin-off of the Technical University of Munich (TUM) by Prof. Daniel Cremers and Andrej Kulikov. Since December 2021, Artisense has been a subsidiary of Kudan Inc.<br />
For more information, please refer to Artisense’s website at <a href="https://www.artisense.ai/" target="_blank" rel="noopener">https://www.artisense.ai/</a>.</p>
<p><strong>About Kudan Inc.</strong><br />
Kudan (Tokyo Stock Exchange securities code: 4425) is a deep tech research and development company specializing in algorithms for artificial perception (AP). As a complement to artificial intelligence (AI), AP functions allow machines to develop autonomy. Kudan currently applies its technical innovation to explore business areas based on its own milestone models for deep tech, which have wide-ranging impact across several major industrial fields.<br />
For more information, please refer to Kudan’s website at <a href="https://www.kudan.io/" target="_blank" rel="noopener noreferrer">https://www.kudan.io/</a>.</p>
<p>■Company Details<br />
Name: Kudan Inc.<br />
Securities Code: 4425<br />
Representative: CEO Daiu Ko</p>
<p>■For more details, please contact us from <a href="https://www.kudan.io/contact" target="_blank" rel="noopener noreferrer">here</a>.</p><p>The post <a href="https://www.kudan.io/blog/kudan-and-artisense-to-sponsor-the-upcoming-workshop-in-october-2022/">Kudan and Artisense to sponsor the upcoming workshop in October 2022: “Map-Based Localization for Autonomous Driving Workshop (ECCV 2022)”</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1394</post-id>	</item>
	</channel>
</rss>
