<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>map-based localization | Kudan global</title>
	<atom:link href="https://www.kudan.io/blog/tag/map-based-localization/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.kudan.io</link>
	<description>Kudan has been providing proprietary Artificial Perception technologies based on SLAM to enable use cases with significant market potential and impact on our lives such as autonomous driving, robotics, AR/VR and smart cities</description>
	<lastBuildDate>Tue, 27 Feb 2024 02:20:09 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.8.13</generator>

<image>
	<url>https://i0.wp.com/www.kudan.io/wp-content/uploads/2020/05/cropped-NoImage.png?fit=32%2C32&#038;ssl=1</url>
	<title>map-based localization | Kudan global</title>
	<link>https://www.kudan.io</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">179852210</site>	<item>
		<title>Understanding Covariance Quality in Robot Localisation</title>
		<link>https://www.kudan.io/blog/understanding-covariance-quality-in-robot-localisation/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=understanding-covariance-quality-in-robot-localisation</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Tue, 27 Feb 2024 02:20:09 +0000</pubDate>
				<category><![CDATA[Tech Blog]]></category>
		<category><![CDATA[Autonomous Driving]]></category>
		<category><![CDATA[autonomous mobile industrial robots]]></category>
		<category><![CDATA[Autonomous Mobile Robot]]></category>
		<category><![CDATA[Autonomous Mobile Robots]]></category>
		<category><![CDATA[autonomous mobility]]></category>
		<category><![CDATA[autonomous vehicles]]></category>
		<category><![CDATA[localization]]></category>
		<category><![CDATA[map-based localization]]></category>
		<category><![CDATA[Map-Based Localization for Autonomous Driving Workshop]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1765</guid>

					<description><![CDATA[<p>(Written by Anthony Glynn, Kudan CTO) Consider a robot navigating the bustling aisles of a warehouse, swiftly picking up and delivering items. It must decide how quickly to move and how much space to leave when turning corners to avoid accidents, like clipping a shelf and causing a cascade of items. For this, the robot [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/understanding-covariance-quality-in-robot-localisation/">Understanding Covariance Quality in Robot Localisation</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>(Written by <a href="https://www.linkedin.com/in/anthony-glynn-952b6653/">Anthony Glynn</a>, Kudan CTO)</p>
<p><img loading="lazy" class="size-large wp-image-1775 aligncenter" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03-1024x455.png?resize=1024%2C455&#038;ssl=1" alt="" width="1024" height="455" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=1024%2C455&amp;ssl=1 1024w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=300%2C133&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=768%2C342&amp;ssl=1 768w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=1536%2C683&amp;ssl=1 1536w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?w=1808&amp;ssl=1 1808w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></p>
<p>Consider a robot navigating the bustling aisles of a warehouse, swiftly picking up and delivering items. It must decide how quickly to move and how much space to leave when turning corners to avoid accidents, like clipping a shelf and causing a cascade of items. For this, the robot relies on its localisation module which integrates data from its sensors, such as cameras, lidars and wheel odometry, and combines this with a prebuilt map of the environment to pinpoint its precise location. The localisation system must not only output its position but also assess how confident it is in its estimate. This confidence, quantified by something called covariance, is crucial. Accurate location data is essential, but so is the robot&#8217;s certainty about this data. If the robot misjudges its certainty, being either too confident or too cautious, it could lead to reckless behaviour or to an overly hesitant and inefficient system.</p>
<h4 id="Covariance"><strong>Covariance</strong></h4>
<p data-renderer-start-pos="949">Rather than relying on a single, precise location estimate, our localisation system instead outputs an entire probability distribution. Covariance, which comes from modelling our estimate as a Gaussian distribution, extends the concept of variance to multiple dimensions. It is represented as a matrix and captures both how spread out our estimates are and the correlation between the different aspects of the robot’s pose, such as the x and y coordinates. A larger covariance indicates a wider spread, signalling greater uncertainty: the robot’s true location could fall within a broader range of values.</p>
<p data-renderer-start-pos="949"><img loading="lazy" class="aligncenter wp-image-1766 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?resize=389%2C389&#038;ssl=1" alt="" width="389" height="389" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?w=389&amp;ssl=1 389w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?resize=150%2C150&amp;ssl=1 150w" sizes="(max-width: 389px) 100vw, 389px" data-recalc-dims="1" /></p>
<p data-renderer-start-pos="949">(Image: two Gaussian distributions each represented by 500 samples and an ellipse depicting the 90% confidence region. The blue distribution has a much smaller covariance than the red distribution, indicating a more certain position estimate.)</p>
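<p>As a concrete sketch of the idea above, a 2-D position covariance is a 2&#215;2 matrix whose diagonal holds the variances of x and y and whose off-diagonal term encodes their correlation. The numbers here are invented purely for illustration, not taken from any Kudan system:</p>

```python
import numpy as np

# Hypothetical 2-D position covariance (units: metres^2), for illustration only.
# Diagonal terms are the variances of the x and y estimates; the off-diagonal
# term encodes how errors in x and y are correlated.
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

sigma_x = np.sqrt(cov[0, 0])                      # std-dev of x estimate: 0.2 m
sigma_y = np.sqrt(cov[1, 1])                      # std-dev of y estimate: 0.3 m
correlation = cov[0, 1] / (sigma_x * sigma_y)     # normalised x-y correlation

# A larger covariance means samples drawn from the belief spread more widely,
# exactly like the red vs blue clouds in the image above.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=500)
```

<p>Drawing samples from the matrix, as in the last two lines, reproduces the kind of point cloud shown in the figure: the wider the covariance, the wider the spread.</p>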
<p data-renderer-start-pos="1821">Effective decision making relies heavily on covariance. The system needs to determine if its confidence in its location estimate is sufficient to proceed with its current task, or if it must take corrective action and attempt to reduce its position uncertainty. Path planners can take pose covariance as input, and this allows them to adjust movement speed as well as path safety margins.</p>
<p data-renderer-start-pos="2211">Covariance also plays a vital role when integrating measurements from different sensors or combining pose estimates output from various internal modules, offering a systematic way to appropriately weight this information. Higher confidence data will be given more weight. This ensures that the most reliable information has the greatest influence on the system’s overall pose estimate.</p>
<p data-renderer-start-pos="2598">It is important that the covariance that is output accurately reflects the true level of uncertainty. An overconfident system could be dangerous, while an underconfident system may be needlessly inefficient.</p>
<h4 id="Overconfidence" data-renderer-start-pos="2598"><strong>Overconfidence</strong></h4>
<p data-renderer-start-pos="2598">The system is overconfident if it assumes its location and map are more accurate than they actually are. The output pose covariance will be smaller than it ought to be, meaning the system underestimates the probability that its actual location is further away from where it thinks it is.</p>
<p data-renderer-start-pos="3123">This can lead to undervaluing new information. If the system believes its current pose estimate too strongly, it may discount new, especially conflicting, data. As a consequence it might resist adapting to new situations, and could even disregard corrective information, ultimately preventing it from reducing its error.</p>
<p data-renderer-start-pos="3465">An overconfident system might cause the robot to exhibit risky behaviours, such as travelling too quickly or not leaving enough obstacle clearance. This could result in dangerous situations, such as collisions or the robot getting stuck.</p>
<h4 id="Underconfidence" data-renderer-start-pos="3711"><strong>Underconfidence</strong></h4>
<p data-renderer-start-pos="3728">Conversely, an underconfident system will be excessively cautious regarding the quality of its pose estimate, resulting in an excessively large covariance. This means it is exaggerating the likelihood that its true location is significantly different from its estimated position.</p>
<p data-renderer-start-pos="4009">This would likely result in reduced efficiency or increased running times as a result of overly cautious behaviours. For example, the robot might move at an unnecessarily slow pace, or it might repeatedly decide that it requires additional data and processing time to confirm information it already knows.</p>
<h4 id="Understanding-covariance-quality" data-renderer-start-pos="4317"><strong>Understanding covariance quality</strong></h4>
<p data-renderer-start-pos="4351">It is therefore imperative that we are able to analyse and understand the quality of the covariance estimates that the system, or any of its internal modules, produces. A good covariance should accurately model the probability: the “true” pose should be contained inside the estimated covariance’s 90% confidence region 90% of the time. It is realistic to expect some degree of degradation in the covariance quality because the system is nonlinear. This means the true probability distribution, in general, can’t be perfectly modelled as a Gaussian distribution, so the Gaussian representation will necessarily be an approximation.</p>
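<p>The 90%-containment criterion above can be checked directly against ground truth: for a 2-DOF Gaussian, the p-confidence region is the ellipse of squared Mahalanobis distance at most &#8722;2&#8201;ln(1&#8722;p). The function below is a minimal sketch of such a check, with synthetic data standing in for real trajectories:</p>

```python
import math
import numpy as np

# For 2 degrees of freedom, the 90% confidence region is the ellipse of
# squared Mahalanobis distance <= -2 * ln(1 - 0.9) ~= 4.605.
CHI2_90_2DOF = -2.0 * math.log(1.0 - 0.9)

def inside_fraction(errors, covs, threshold=CHI2_90_2DOF):
    """Fraction of (estimate - ground truth) errors falling inside each
    estimate's 90% confidence region. A well-calibrated system -> ~0.9."""
    hits = 0
    for e, cov in zip(errors, covs):
        d2 = e @ np.linalg.inv(cov) @ e   # squared Mahalanobis distance
        hits += d2 <= threshold
    return hits / len(errors)

# Sanity check with synthetic data: errors actually drawn from the reported
# covariance should land inside the 90% region about 90% of the time.
rng = np.random.default_rng(1)
cov = np.diag([0.04, 0.09])
errors = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
frac = inside_fraction(errors, [cov] * 5000)
```

<p>On real data, a fraction well above 0.9 would indicate underconfidence and one well below 0.9 overconfidence, which is exactly the degradation the analysis sets out to measure.</p>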
<p data-renderer-start-pos="4984">To perform this analysis we look at the system’s performance over a wide variety of datasets and compare it to ground truth. Internally at Kudan we are continuing to explore better ways of measuring and visualising covariance quality, as well as trying to understand which variables have the most significant impact on it.</p>
<p data-renderer-start-pos="5326">Once a system’s covariance quality is understood, the next step is to use this information to calibrate the uncertainty estimation: adjusting the estimated covariance in order to better represent the true uncertainty.</p>
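<p>One simple form such a calibration could take, sketched here under assumptions of our own (the post does not specify Kudan's method), is a scalar inflation factor derived from the normalised estimation error squared (NEES), whose mean equals the state dimension for a well-calibrated Gaussian:</p>

```python
import numpy as np

def calibration_scale(errors, covs):
    """Estimate a scalar inflation factor from held-out ground truth:
    mean NEES should equal the state dimension when the reported
    covariance matches the true uncertainty."""
    dof = errors.shape[1]
    nees = np.mean([e @ np.linalg.inv(c) @ e for e, c in zip(errors, covs)])
    return nees / dof   # multiply reported covariances by this factor

# Illustrative use: a system reporting covariances 4x too small (std-devs
# 2x too small) should yield a scale factor near 4.
rng = np.random.default_rng(2)
true_cov = np.diag([0.04, 0.04])
reported = [true_cov / 4.0] * 5000
errors = rng.multivariate_normal([0.0, 0.0], true_cov, size=5000)
scale = calibration_scale(errors, reported)
```

<p>A single scalar is of course the crudest possible adjustment; the point is only that calibration can be driven by the same ground-truth comparison used to assess quality.</p>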
<h4 id="Closing-thoughts" data-renderer-start-pos="5326"><strong>Closing thoughts</strong></h4>
<p>The management of uncertainty through covariance is fundamental to the operational success of mobile robots, ensuring both safety and efficiency in dynamic environments such as warehouses. By refining our understanding and calibration of covariance estimates, we continue pushing closer to finding the right balance between avoiding the pitfalls of dangerous overconfidence, and the inefficiencies of undue caution.</p>
<p>&nbsp;</p>
<p><a href="https://www.kudan.io/contact/"><strong>Please contact us for further technical information</strong></a></p>
<p>&nbsp;</p><p>The post <a href="https://www.kudan.io/blog/understanding-covariance-quality-in-robot-localisation/">Understanding Covariance Quality in Robot Localisation</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1765</post-id>	</item>
		<item>
		<title>Kudan to sponsor the upcoming ICCV workshop: &#8220;Map-based Localization for Autonomous Driving&#8221; together with Artisense in October 2021</title>
		<link>https://www.kudan.io/blog/kudan-sponsored-iccv-workshop-for-map-based-localization-for-autonomous-driving/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=kudan-sponsored-iccv-workshop-for-map-based-localization-for-autonomous-driving</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Tue, 17 Aug 2021 03:02:37 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[artisense]]></category>
		<category><![CDATA[Autonomous Driving]]></category>
		<category><![CDATA[ICCV]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[map-based localization]]></category>
		<category><![CDATA[sponsor]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=814</guid>

					<description><![CDATA[<p>Kudan Inc. (headquarters in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”) is pleased to announce that Kudan and Artisense Corporation (Kudan’s group company, hereafter “Artisense”) will sponsor the Workshop on “Map-based Localization for Autonomous Driving” at the International Conference on Computer Vision (ICCV), taking place 11-17 October 2021, to contribute to further advancements in SLAM and localization [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/kudan-sponsored-iccv-workshop-for-map-based-localization-for-autonomous-driving/">Kudan to sponsor the upcoming ICCV workshop: “Map-based Localization for Autonomous Driving” together with Artisense in October 2021</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Kudan Inc. (headquarters in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”) is pleased to announce that Kudan and Artisense Corporation (Kudan’s group company, hereafter “Artisense”) will sponsor the Workshop on “Map-based Localization for Autonomous Driving” at the International Conference on Computer Vision (ICCV), taking place 11-17 October 2021, to contribute to further advancements in SLAM and localization in this area.</p>
<p><img loading="lazy" class="size-full wp-image-815 aligncenter" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2021/08/pic_ICCV.png?resize=943%2C527&#038;ssl=1" alt="" width="943" height="527" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2021/08/pic_ICCV.png?w=943&amp;ssl=1 943w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2021/08/pic_ICCV.png?resize=300%2C168&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2021/08/pic_ICCV.png?resize=768%2C429&amp;ssl=1 768w" sizes="(max-width: 943px) 100vw, 943px" data-recalc-dims="1" /></p>
<p>Kudan and Artisense sponsored the successful first workshop on “Map-based Localization for Autonomous Driving” (MLAD), which took place at the European Conference on Computer Vision (ECCV) in August 2020.</p>
<p>This coming workshop is the second edition. Despite the progress of the last few years, numerous open questions remain in the field of map-based localization, including how to generate maps efficiently and at low cost at very large scale and, more importantly, how those maps can be kept up to date. The workshop will explore these questions.</p>
<p>Confirmed speakers for this workshop include Wolfram Burgard (University of Freiburg, Toyota Research Institute), Michael Milford (Queensland University of Technology) and Torsten Sattler (Czech Technical University), with several more speakers expected.</p>
<p>The workshop will host the relocalization challenge once again based on the “<a href="https://www.4seasons-dataset.com/" target="_blank" rel="noopener noreferrer">4Seasons</a>” dataset, a new multi-weather, all-seasons dataset recorded using Artisense’s <a href="https://www.artisense.ai/vins-2020" target="_blank" rel="noopener noreferrer">Visual Inertial Navigation System (VINS)</a>. This dataset aims to enable research in robust vision-based odometry, as well as map-based localization.</p>
<p>Kudan and Artisense will continue to promote the further development of SLAM and localization technology in the area of autonomous driving, together with leading internal and external experts in this area.</p>
<p>For more details on the workshop and topics covered, please visit <a href="https://sites.google.com/view/mlad-iccv2021" target="_blank" rel="noopener noreferrer">here</a>.<br />
We look forward to great discussion and promising new concepts for map-based relocalization!</p>
<p><strong>About Artisense Corporation</strong><br />
Artisense is a computer vision and sensor fusion software company that develops an integrated positioning platform using cameras as lead sensors for the automation of robots, vehicles, and spatial intelligence applications. On a mission to accelerate the adoption of autonomous robots and machines, Artisense provides products and technology for highly accurate, robust, safe, and low-cost navigation in any space.<br />
For more information, please refer to Artisense’s website at <a href="http://www.artisense.ai/" target="_blank" rel="noopener noreferrer">http://www.artisense.ai/</a>.</p>
<p><strong>About Kudan Inc.</strong><br />
Kudan (Tokyo Stock Exchange securities code: 4425) is a Deep Tech research and development company specializing in algorithms to enable artificial perception (AP). As a complement to artificial intelligence (AI), AP functions allow machines to develop autonomy. Currently, Kudan is using its high-level technical innovation to explore business areas based on its milestone models established for Deep Tech, which provide wide-ranging impact on several major industrial fields. For more information, please refer to Kudan’s website at <a href="https://www.kudan.io/" target="_blank" rel="noopener noreferrer">https://www.kudan.io/</a>.</p>
<p>■Company Details<br />
Name: Kudan Inc.<br />
Securities Code: 4425<br />
Representative: CEO Daiu Ko</p><p>The post <a href="https://www.kudan.io/blog/kudan-sponsored-iccv-workshop-for-map-based-localization-for-autonomous-driving/">Kudan to sponsor the upcoming ICCV workshop: “Map-based Localization for Autonomous Driving” together with Artisense in October 2021</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">814</post-id>	</item>
	</channel>
</rss>
