<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Autonomous Driving | Kudan global</title>
	<atom:link href="https://www.kudan.io/blog/tag/autonomous-driving/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.kudan.io</link>
	<description>Kudan has been providing proprietary Artificial Perception technologies based on SLAM to enable use cases with significant market potential and impact on our lives, such as autonomous driving, robotics, AR/VR, and smart cities</description>
	<lastBuildDate>Fri, 19 Sep 2025 03:20:59 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.8.13</generator>

<image>
	<url>https://i0.wp.com/www.kudan.io/wp-content/uploads/2020/05/cropped-NoImage.png?fit=32%2C32&#038;ssl=1</url>
	<title>Autonomous Driving | Kudan global</title>
	<link>https://www.kudan.io</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">179852210</site>	<item>
		<title>Launching R&#038;D on a Software Development Platform for Construction Robotics ~Building a Common Platform to Enable Seamless Robot Collaboration and Accelerate Digital Transformation in the Construction Industry~</title>
		<link>https://www.kudan.io/blog/launching-rd-on-a-software-development-platform-for-construction-robotics/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=launching-rd-on-a-software-development-platform-for-construction-robotics</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Fri, 19 Sep 2025 02:30:38 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[5G communication]]></category>
		<category><![CDATA[AI robotics]]></category>
		<category><![CDATA[Akari]]></category>
		<category><![CDATA[Artificial Perception]]></category>
		<category><![CDATA[Asratec]]></category>
		<category><![CDATA[Autonomous Driving]]></category>
		<category><![CDATA[Autonomous navigation]]></category>
		<category><![CDATA[BIM]]></category>
		<category><![CDATA[CIM]]></category>
		<category><![CDATA[construction automation]]></category>
		<category><![CDATA[Construction Industry]]></category>
		<category><![CDATA[construction robotics]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[digital twin]]></category>
		<category><![CDATA[JIZAIE]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[mesh networks]]></category>
		<category><![CDATA[NEDO project]]></category>
		<category><![CDATA[post-5G infrastructure]]></category>
		<category><![CDATA[robot collaboration]]></category>
		<category><![CDATA[robot interoperability]]></category>
		<category><![CDATA[robot management system]]></category>
		<category><![CDATA[robot operational support]]></category>
		<category><![CDATA[robot perception]]></category>
		<category><![CDATA[robot productivity]]></category>
		<category><![CDATA[robot standardization]]></category>
		<category><![CDATA[robot system integration]]></category>
		<category><![CDATA[robotics platform]]></category>
		<category><![CDATA[robotics transformation]]></category>
		<category><![CDATA[RX technology]]></category>
		<category><![CDATA[safety in construction]]></category>
		<category><![CDATA[SENSYN ROBOTICS]]></category>
		<category><![CDATA[skilled labor shortage solution]]></category>
		<category><![CDATA[Software Development Platform]]></category>
		<category><![CDATA[TAKENAKA]]></category>
		<category><![CDATA[virtual simulation]]></category>
		<category><![CDATA[Wi-Fi connectivity]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=2109</guid>

					<description><![CDATA[<p>Kudan Inc. (CEO: Daiu Ko), TAKENAKA CORPORATION (President: Masato Sasaki), JIZAIE Inc. (CEO: Junki Nakagawa), Asratec Corp. (President &#38; CEO: Masato Sakatani), Akari Inc. (CEO: Yuki Noro), and SENSYN ROBOTICS, Inc. (CEO: Takuya Kitamura) have jointly launched research and development (R&#38;D)※1※2 of a software development platform for construction robotics. This R&#38;D initiative will build an open [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/launching-rd-on-a-software-development-platform-for-construction-robotics/">Launching R&D on a Software Development Platform for Construction Robotics ~Building a Common Platform to Enable Seamless Robot Collaboration and Accelerate Digital Transformation in the Construction Industry~</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Kudan Inc. (CEO: Daiu Ko), TAKENAKA CORPORATION (President: Masato Sasaki), JIZAIE Inc. (CEO: Junki Nakagawa), Asratec Corp. (President &amp; CEO: Masato Sakatani), Akari Inc. (CEO: Yuki Noro), and SENSYN ROBOTICS, Inc. (CEO: Takuya Kitamura) have jointly launched research and development (R&amp;D)※1※2 of a software development platform for construction robotics.</p>
<p>This R&amp;D initiative will build an open development platform enabling diverse robots at construction sites—such as those for material transport, fireproof coating, surveying, and cleaning—to use common functional modules in combination. This will allow robot manufacturers and system integrators to freely add and expand modules, helping to address the shortage of skilled workers and accelerate the adoption of robotics in the construction industry.</p>
<p>※1 This project is being conducted under NEDO’s (New Energy and Industrial Technology Development Organization) “Research and Development Project of the Enhanced Infrastructures for Post-5G Information and Communication Systems: Building a Software Development Platform for Robotics” (commissioned)<br />
<a href="https://www.nedo.go.jp/news/press/AA5_101875.html" target="_blank" rel="noopener">https://www.nedo.go.jp/news/press/AA5_101875.html</a> (Japanese only)</p>
<p>※2 Kudan Selected for NEDO’s Open Call: “Research and Development Project of the Enhanced Infrastructures for Post-5G Information and Communication Systems: Building a Software Development Platform for Robotics”<br />
<a href="https://www.kudan.io/blog/kudan-selected-for-nedos-open-call/" target="_blank" rel="noopener">https://www.kudan.io/blog/kudan-selected-for-nedos-open-call/</a></p>
<h3><strong>Background of the Development</strong></h3>
<p>The construction industry is facing a serious shortage and aging of skilled workers, raising expectations for solutions through Robotics Transformation (RX) technologies. However, at present, each vendor develops its own robots independently, creating challenges such as a lack of interoperability and high development costs.</p>
<p>There is therefore an urgent need to establish an open development platform that addresses these issues, building on the expertise accumulated through the activities of the Construction RX Consortium※3 (comprising over 300 member companies as of the end of August 2025).</p>
<p>※3 A private-sector organization established to promote Robotics Transformation (RX)—the use of construction robots, IoT applications, and related technologies—to address critical challenges facing the construction industry, such as a declining workforce, and the need to improve productivity and safety</p>
<h3><strong>Overview of the Development</strong></h3>
<p>Through the following six research and development initiatives, we will build a software development platform for construction robotics.</p>
<p>1. Overall Architecture Design (TAKENAKA)</p>
<ul>
<li>Designing an architecture that integrates all components of a robot, from mechanical hardware to software</li>
<li>Creating an architecture that can be commonly used across robots from different manufacturers</li>
</ul>
<p>2. Software Function Development (Kudan)</p>
<ul>
<li>Developing technologies that enable robots to accurately recognize their position and navigate autonomously, even at constantly changing construction sites</li>
<li>Building systems that allow multiple robots to coordinate and perform tasks efficiently</li>
</ul>
<p>3. Hardware Function Development (JIZAIE)</p>
<ul>
<li>Developing a standardized mobile unit adaptable to various construction tasks</li>
<li>Designing a structure that enables easy installation and replacement of sensors and control devices</li>
</ul>
<p>4. Communication Infrastructure (Asratec)</p>
<ul>
<li>Establishing a stable communication system combining multiple methods such as 5G, Wi-Fi, and mesh networks</li>
<li>Developing a communication platform that flexibly adapts to environmental changes at construction sites (e.g., varying obstacles)</li>
</ul>
<p>5. Pre-Verification Technology in Virtual Space (Akari)</p>
<ul>
<li>Reproducing actual construction sites with high precision in a computer environment to test robot operations in advance</li>
<li>Creating realistic work environment simulations linked with building design data (BIM/CIM)</li>
</ul>
<p>6. Operational Support and Management Tools (SENSYN ROBOTICS)</p>
<ul>
<li>Developing a management system for centralized monitoring and control of multiple robots</li>
<li>Providing a standardized interface that enables unified operation of robots from different manufacturers</li>
</ul>
<h3><strong>Future Outlook</strong></h3>
<p>This R&amp;D initiative aims to reduce the development and operational costs of robotic systems. In collaboration with the “Digital Robotics System Technology Platform Project,” we will verify the practicality of the platform through demonstrations involving multiple robotic systems.</p>
<p>Looking ahead, the platform established in the construction industry will be extended to other sectors, contributing to strengthening the international competitiveness of Japan’s robotics industry.</p>
<p><strong>About Kudan Inc.</strong><br />
Kudan leads the advancement of next-generation solutions such as robotics, autonomous driving, and digital twins through research and development, as well as the provision of spatial perception algorithms that connect the physical and digital worlds. Originating from the United Kingdom, Kudan is a global company that, with innovative artificial perception technology (the “eyes” of machines) at its core. By extending the application of artificial intelligence from the digital space into the physical space, Kudan aims to fundamentally solve social issues and dramatically improve productivity by promoting automation, unmanned operation, and remote accessibility across all industries.<br />
For more information, please refer to Kudan’s website at <a href="https://www.kudan.io/" target="_blank" rel="noopener">https://www.kudan.io/</a>.</p>
<p>■Company Details<br />
Name: Kudan Inc.<br />
Securities Code: 4425 (TSE Growth)<br />
Representative: CEO Daiu Ko</p>
<p>■Contact Information<br />
For more details, please contact us from <a href="https://www.kudan.io/contact" target="_blank" rel="noopener">here</a></p><p>The post <a href="https://www.kudan.io/blog/launching-rd-on-a-software-development-platform-for-construction-robotics/">Launching R&D on a Software Development Platform for Construction Robotics ~Building a Common Platform to Enable Seamless Robot Collaboration and Accelerate Digital Transformation in the Construction Industry~</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2109</post-id>	</item>
		<item>
		<title>Kudan Collaborates with a Leading Manufacturer to Successfully Complete Autonomous Mobility Demonstration Experiment with Accuracy Within 10cm – Addressing the Growing Demand for Automation in the Logistics Sector</title>
		<link>https://www.kudan.io/blog/kudan-collaborates-with-a-leading-manufacturer-to-successfully-complete-autonomous-mobility-demo/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=kudan-collaborates-with-a-leading-manufacturer-to-successfully-complete-autonomous-mobility-demo</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Mon, 16 Dec 2024 23:30:00 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[3D-Lidar SLAM]]></category>
		<category><![CDATA[AP]]></category>
		<category><![CDATA[Artificial Perception]]></category>
		<category><![CDATA[Autonomous Driving]]></category>
		<category><![CDATA[autonomous mobility]]></category>
		<category><![CDATA[digital twin]]></category>
		<category><![CDATA[Global Navigation Satellite Systems]]></category>
		<category><![CDATA[GNSS]]></category>
		<category><![CDATA[High-precision Navigation]]></category>
		<category><![CDATA[localization]]></category>
		<category><![CDATA[logistics]]></category>
		<category><![CDATA[robotics]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<category><![CDATA[SLAM]]></category>
		<category><![CDATA[Smart Factory]]></category>
		<category><![CDATA[Visual SLAM]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1941</guid>

					<description><![CDATA[<p>Kudan has successfully completed a demonstration experiment on autonomous mobility for specialized transport vehicles used in factory operations, in collaboration with a leading Japanese manufacturing company. In this project, Kudan utilized its proprietary technology, Kudan SLAM, achieving high-precision localization with an accuracy of within 10cm, suitable for commercial applications. In recent years, the aging of [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/kudan-collaborates-with-a-leading-manufacturer-to-successfully-complete-autonomous-mobility-demo/">Kudan Collaborates with a Leading Manufacturer to Successfully Complete Autonomous Mobility Demonstration Experiment with Accuracy Within 10cm – Addressing the Growing Demand for Automation in the Logistics Sector</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Kudan has successfully completed a demonstration experiment on autonomous mobility for specialized transport vehicles used in factory operations, in collaboration with a leading Japanese manufacturing company. In this project, Kudan utilized its proprietary technology, Kudan SLAM, achieving high-precision localization with an accuracy of within 10cm, suitable for commercial applications.</p>
<p>In recent years, the aging of drivers and a severe labor shortage have significantly impacted supply chains in developed countries, including Japan. This challenge extends beyond public road transport, affecting intra-factory logistics and highlighting an urgent need for supply chain optimization. As industries strive to seamlessly connect factory operations with external environments and advance the concept of smart factories, the global demand for autonomous mobility technology is expected to grow substantially.</p>
<p>Amid these circumstances, while the need for efficiency in transport operations is increasing, there remain significant challenges in achieving high-precision autonomous mobility in environments where GNSS (Global Navigation Satellite Systems) is ineffective, such as indoor facilities or hybrid environments spanning both indoor and outdoor areas.</p>
<p>This project also faced similar challenges. While GNSS was sufficient for a certain level of autonomous mobility outdoors, realizing autonomous mobility in indoor or hybrid environments presented technical hurdles.</p>
<p>To address these challenges, Kudan conducted a demonstration experiment using its proprietary artificial perception technologies, Visual SLAM and 3D-Lidar SLAM. The experiment successfully achieved the targeted localization accuracy of within 10cm in indoor environments.</p>
<p>In addition, the following strengths of Kudan&#8217;s SLAM technology were confirmed during this project:</p>
<ol>
<li><strong>High Technical Flexibility</strong><br />
Kudan SLAM can be easily retrofitted to existing vehicles and systems, minimizing the need for new investments while enhancing performance.</li>
<li><strong>Adaptability to Changing Environments</strong><br />
Even in dynamic indoor environments, such as factories where conditions change significantly due to inventory fluctuations, Kudan SLAM enables high-precision localization using a single pre-generated 3D map.</li>
<li><strong>Scalability of Technology</strong><br />
Kudan SLAM is not limited to indoor environments but can also seamlessly support autonomous mobility in outdoor or hybrid environments, enabling deployment across diverse industrial sectors.</li>
</ol>
<p>Building on these results, Kudan aims to expand the scale of its demonstrations and conduct further verifications in more complex environments, working toward the transition to full-scale operational deployment. Furthermore, Kudan will continue to support operational efficiency in a wide range of industries by leveraging robotics and digital twin technologies, driving innovation to meet societal needs with cutting-edge solutions.</p>
<p><strong>About Kudan Inc.</strong><br />
Kudan is a deep tech research and development company specializing in algorithms for artificial perception (AP). As a complement to artificial intelligence (AI), AP functions allow machines to develop autonomy. Currently, Kudan is licensing its technology for next-generation solution areas such as digital twin, robotics and autonomous driving.<br />
For more information, please refer to Kudan’s website at <a href="https://www.kudan.io/" target="_blank" rel="noopener">https://www.kudan.io/</a>.</p>
<p>■Company Details<br />
Name: Kudan Inc.<br />
Securities Code: 4425 (TSE Growth)<br />
Representative: CEO Daiu Ko</p>
<p>■For more details, please contact us from <a href="https://www.kudan.io/contact" target="_blank" rel="noopener">here</a>.</p><p>The post <a href="https://www.kudan.io/blog/kudan-collaborates-with-a-leading-manufacturer-to-successfully-complete-autonomous-mobility-demo/">Kudan Collaborates with a Leading Manufacturer to Successfully Complete Autonomous Mobility Demonstration Experiment with Accuracy Within 10cm – Addressing the Growing Demand for Automation in the Logistics Sector</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1941</post-id>	</item>
		<item>
		<title>Understanding Covariance Quality in Robot Localisation</title>
		<link>https://www.kudan.io/blog/understanding-covariance-quality-in-robot-localisation/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=understanding-covariance-quality-in-robot-localisation</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Tue, 27 Feb 2024 02:20:09 +0000</pubDate>
				<category><![CDATA[Tech Blog]]></category>
		<category><![CDATA[Autonomous Driving]]></category>
		<category><![CDATA[autonomous mobile industrial robots]]></category>
		<category><![CDATA[Autonomous Mobile Robot]]></category>
		<category><![CDATA[Autonomous Mobile Robots]]></category>
		<category><![CDATA[autonomous mobility]]></category>
		<category><![CDATA[autonomous vehicles]]></category>
		<category><![CDATA[localization]]></category>
		<category><![CDATA[map-based localization]]></category>
		<category><![CDATA[Map-Based Localization for Autonomous Driving Workshop]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1765</guid>

					<description><![CDATA[<p>(Written by Anthony Glynn, Kudan CTO) Consider a robot navigating the bustling aisles of a warehouse, swiftly picking up and delivering items. It must decide how quickly to move and how much space to leave when turning corners to avoid accidents, like clipping a shelf and causing a cascade of items. For this, the robot [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/understanding-covariance-quality-in-robot-localisation/">Understanding Covariance Quality in Robot Localisation</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>(Written by <a href="https://www.linkedin.com/in/anthony-glynn-952b6653/">Anthony Glynn</a>, Kudan CTO)</p>
<p><img loading="lazy" class="size-large wp-image-1775 aligncenter" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03-1024x455.png?resize=1024%2C455&#038;ssl=1" alt="" width="1024" height="455" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=1024%2C455&amp;ssl=1 1024w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=300%2C133&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=768%2C342&amp;ssl=1 768w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?resize=1536%2C683&amp;ssl=1 1536w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/Screenshot-2024-02-27-at-10.16.03.png?w=1808&amp;ssl=1 1808w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></p>
<p>Consider a robot navigating the bustling aisles of a warehouse, swiftly picking up and delivering items. It must decide how quickly to move and how much space to leave when turning corners to avoid accidents, like clipping a shelf and causing a cascade of items. For this, the robot relies on its localisation module, which integrates data from its sensors, such as cameras, lidars and wheel odometry, and combines this with a prebuilt map of the environment to pinpoint its precise location. The localisation system must not only output its position but also assess how confident it is in its estimate. This confidence, quantified by something called covariance, is crucial. Accurate location data is essential, but so is the robot&#8217;s certainty about this data. If the robot misjudges its certainty, being either too confident or too cautious, the result could be reckless behaviour or an overly hesitant and inefficient system.</p>
<h4><strong>Covariance</strong></h4>
<p data-renderer-start-pos="949">Rather than relying on a single, precise location estimate, our localisation system instead outputs an entire probability distribution. Covariance, which comes from modelling our estimate as a Gaussian distribution, extends the concept of variance to multiple dimensions. It is represented as a matrix and captures both the notion of how spread out our estimates are, as well as the correlation between the different aspects of the robot’s pose such as the x and y coordinates. A larger covariance indicates a wider spread, signalling greater uncertainty: the robot’s true location could fall within a broader range of values.</p>
<p data-renderer-start-pos="949"><img loading="lazy" class="aligncenter wp-image-1766 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?resize=389%2C389&#038;ssl=1" alt="" width="389" height="389" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?w=389&amp;ssl=1 389w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2024/02/download-2.png?resize=150%2C150&amp;ssl=1 150w" sizes="(max-width: 389px) 100vw, 389px" data-recalc-dims="1" /></p>
<p data-renderer-start-pos="949">(Image: two Gaussian distributions each represented by 500 samples and an ellipse depicting the 90% confidence region. The blue distribution has a much smaller covariance than the red distribution, indicating a more certain position estimate.)</p>
<p data-renderer-start-pos="1821">Effective decision making relies heavily on covariance. The system needs to determine if its confidence in its location estimate is sufficient to proceed with its current task, or if it must take corrective action and attempt to reduce its position uncertainty. Path planners can take pose covariance as input, and this allows them to adjust movement speed as well as path safety margins.</p>
<p data-renderer-start-pos="2211">Covariance also plays a vital role when integrating measurements from different sensors or combining pose estimates output from various internal modules, offering a systematic way to appropriately weight this information. Higher confidence data will be given more weight. This ensures that the most reliable information has the greatest influence on the system’s overall pose estimate.</p>
<p data-renderer-start-pos="2598">It is important that the covariance that is output accurately reflects the true level of uncertainty. An overconfident could be dangerous, and a system that is too underconfident might be too inefficient.</p>
<p data-renderer-start-pos="2598"><strong>Overconfidence</strong></p>
<p data-renderer-start-pos="2598">The system is overconfident if it assumes it’s location and map are more accurate than they actually are. The output pose covariance will be smaller than it ought to be, meaning the system is underestimating the probability that its actual location could be further away from where it thinks it is.</p>
<p data-renderer-start-pos="3123">This can lead to underestimating new information. If it believes in its current pose estimate too strongly, it may undervalue new, especially conflicting, data. As a consequence it might resist adapting to new situations. This could even lead it to disregard corrective information, potentially preventing it ultimately from reducing error.</p>
<p data-renderer-start-pos="3465">An overconfident might cause the robot to exhibit risky behaviours such as travelling too quickly, or not leaving enough obstacle clearance. This could potentially result in dangerous situations, such as collisions or the robot getting stuck.</p>
<h4 id="Underconfidence" data-renderer-start-pos="3711"><strong>Underconfidence</strong></h4>
<p data-renderer-start-pos="3728">Conversely, an underconfident system will be excessively cautious regarding the quality of its pose estimate, resulting in an excessively large covariance. This means it is exaggerating the likelihood that its true location is significantly different from its estimated position.</p>
<p data-renderer-start-pos="4009">This would likely result in reduced efficiency, or increased running times as a result from overly cautious behaviours<strong data-renderer-mark="true">. </strong>For example the robot might move at a ridiculously slow pace, or it might repeatedly keep deciding it requires additional data and processing time to confirm already known information.</p>
<h4 id="Understanding-covariance-quality" data-renderer-start-pos="4317"><strong>Understanding covariance quality</strong></h4>
<p data-renderer-start-pos="4351">It is therefore imperative that we are able to analyse and understand the quality of the covariance estimates that the system, or any of its internal modules, produces. A good covariance should accurately model the probability: the “true” pose should be contained inside the estimated covariance’s 90% confidence region 90% of the time. It is realistic to expect some degree of degradation in the covariance quality because the system is nonlinear. This means the true probability distribution, in general, can’t be perfectly modelled as a Gaussian distribution, so the Gaussian representation will necessarily be an approximation.</p>
<p data-renderer-start-pos="4984">To perform this analysis we look at the system’s performance over a large variety of datasets, and compare it to ground-truth. Internally at Kudan we are continuing to explore better ways of measuring and visualising covariance quality, as well as trying to understand which variables have the most significant impact on covariance quality.</p>
<p data-renderer-start-pos="5326">Once a system’s covariance quality is understood, the next step is to use this information to calibrate the uncertainty estimation: adjusting the estimated covariance in order to better represent the true uncertainty.</p>
<p data-renderer-start-pos="5326"><strong>Closing thoughts</strong></p>
<p>The management of uncertainty through covariance is fundamental to the operational success of mobile robots, ensuring both safety and efficiency in dynamic environments such as warehouses. By refining our understanding and calibration of covariance estimates, we continue pushing closer to finding the right balance between avoiding the pitfalls of dangerous overconfidence, and the inefficiencies of undue caution.</p>
<p><a href="https://www.kudan.io/contact/"><strong>Please contact us for learning further technical information</strong></a></p>
<p>The post <a href="https://www.kudan.io/blog/understanding-covariance-quality-in-robot-localisation/">Understanding Covariance Quality in Robot Localisation</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1765</post-id>	</item>
		<item>
		<title>China’s Whale Dynamic releases products for autonomous driving by integrating Kudan 3D-Lidar SLAM and won a project in Tier1 City in China</title>
		<link>https://www.kudan.io/blog/whale-dynamic-releases-products-for-autonomous-driving-by-integrating-kudan-3d-lidar-slam/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=whale-dynamic-releases-products-for-autonomous-driving-by-integrating-kudan-3d-lidar-slam</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Mon, 11 Jul 2022 07:00:00 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[Autonomous]]></category>
		<category><![CDATA[Autonomous Driving]]></category>
		<category><![CDATA[China]]></category>
		<category><![CDATA[Global]]></category>
		<category><![CDATA[Japan]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[Kudan 3D-Lidar SLAM]]></category>
		<category><![CDATA[Kudan SLAM]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<category><![CDATA[SLAM]]></category>
		<category><![CDATA[Whale Dynamic]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1317</guid>

					<description><![CDATA[<p>Kudan Inc. (headquartered in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”), a leading provider of Artificial Perception / SLAM technology across a variety of applications, is pleased to announce that its business partner Whale Dynamic Co.Ltd. (headquartered in Shenzhen, China; CEO David Yufei Chang, hereafter “Whale Dynamic”) released the products of autonomous delivery vehicles and [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/whale-dynamic-releases-products-for-autonomous-driving-by-integrating-kudan-3d-lidar-slam/">China’s Whale Dynamic releases products for autonomous driving by integrating Kudan 3D-Lidar SLAM and won a project in Tier1 City in China</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Kudan Inc. (headquartered in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”), a leading provider of Artificial Perception / SLAM technology across a variety of applications, is pleased to announce that its business partner Whale Dynamic Co.Ltd. (headquartered in Shenzhen, China; CEO David Yufei Chang, hereafter “Whale Dynamic”) has released autonomous delivery vehicle products and associated HD map toolchains that integrate Kudan 3D-Lidar SLAM technology. The two companies have also won a project in a Tier 1 city in China to deliver the released products, and will further partner to accelerate sales to the global market and expand sales in China.</p>
<p><img loading="lazy" class="aligncenter wp-image-1318 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic1_whale-dynamic.png?resize=945%2C197&#038;ssl=1" alt="" width="945" height="197" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic1_whale-dynamic.png?w=945&amp;ssl=1 945w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic1_whale-dynamic.png?resize=300%2C63&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic1_whale-dynamic.png?resize=768%2C160&amp;ssl=1 768w" sizes="(max-width: 945px) 100vw, 945px" data-recalc-dims="1" /></p>
<p>The market demand for autonomous driving is forecasted to grow continuously over the next ten years. Especially for autonomous delivery, there is a rapidly increasing social need due to continuing urbanization, an aging population, the rise of E-commerce, and the shortage of delivery workers in some countries. However, the maturity of the autonomous delivery vehicles/robots available in the market is not yet satisfactory, particularly when operating on urban roads with complex traffic situations.</p>
<p>With this background, Kudan and Whale Dynamic have been in technology collaboration since 2021 to develop market-leading products for autonomous driving, and today Whale Dynamic released the following products, which use Kudan’s high-performance 3D-Lidar SLAM technology. With proven high accuracy and robustness even in dynamic environments such as urban public roads, Kudan SLAM technology enables accurate HD map creation and precise position understanding of the delivery vehicle during operation.</p>
<ul>
<li>Mapping hardware kit and software toolchain (refer to Figure 1) for HD semantic map generation, with the capability to generate dense point clouds and build them into semantic HD maps with centimeter-level accuracy. The generated HD maps support various formats and can be applied to a wide range of applications in the autonomous driving domain.</li>
<li>Multi-Purpose Autonomous Vehicle (MPAV, refer to Figure 2) for autonomous delivery, with the capability to carry out its daily tasks on urban public roads and operate fully electrically. Thanks to its sophisticated design, detailed operational scenario design, and extensive road testing, the MPAV can be applied to various use cases.</li>
</ul>
<div id="attachment_1319" style="width: 955px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1319" loading="lazy" class="wp-image-1319 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic2_whale-dynamic.png?resize=945%2C479&#038;ssl=1" alt="" width="945" height="479" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic2_whale-dynamic.png?w=945&amp;ssl=1 945w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic2_whale-dynamic.png?resize=300%2C152&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic2_whale-dynamic.png?resize=768%2C389&amp;ssl=1 768w" sizes="(max-width: 945px) 100vw, 945px" data-recalc-dims="1" /><p id="caption-attachment-1319" class="wp-caption-text"><em>Figure 1 Mapping HW Kit and SW Toolchain</em></p></div>
<div id="attachment_1320" style="width: 955px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1320" loading="lazy" class="wp-image-1320 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic3_whale-dynamic.png?resize=945%2C375&#038;ssl=1" alt="" width="945" height="375" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic3_whale-dynamic.png?w=945&amp;ssl=1 945w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic3_whale-dynamic.png?resize=300%2C119&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic3_whale-dynamic.png?resize=768%2C305&amp;ssl=1 768w" sizes="(max-width: 945px) 100vw, 945px" data-recalc-dims="1" /><p id="caption-attachment-1320" class="wp-caption-text"><em>Figure 2 Multi-Purpose Autonomous Vehicle &#8211; WD1</em></p></div>
<p>The release also includes the Drivable Test Vehicle (DTV, refer to Figure 3), which was developed to let developers and researchers at autonomous service enterprises or academic institutions validate autonomous driving technology with greater customization flexibility. The vehicle enables both autonomous and manual driving in parallel, and ensures quick and practical autonomous driving validation at a more reasonable cost.</p>
<div id="attachment_1321" style="width: 955px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1321" loading="lazy" class="wp-image-1321 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic4_whale-dynamic.png?resize=945%2C358&#038;ssl=1" alt="" width="945" height="358" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic4_whale-dynamic.png?w=945&amp;ssl=1 945w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic4_whale-dynamic.png?resize=300%2C114&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/07/pic4_whale-dynamic.png?resize=768%2C291&amp;ssl=1 768w" sizes="(max-width: 945px) 100vw, 945px" data-recalc-dims="1" /><p id="caption-attachment-1321" class="wp-caption-text"><em>Figure 3 Drivable Test Vehicle</em></p></div>
<p>The details of these products can be found below for reference:<br />
<a href="https://www.kudan.io/wp-content/uploads/2022/07/Whale_Dynamic_Product_Reference_3.pdf" target="_blank" rel="noopener">Whale Dynamic Product Reference</a></p>
<p>In conjunction with the product release, the parties have also won a purchase order for an autonomous driving project in China, and are working closely towards project completion.</p>
<p>The two companies will also further strengthen their collaboration and partner on the sales of these products to the Chinese, Japanese, and other global markets, leveraging each company&#8217;s networks and sales channels to fulfill the enormous demand for autonomous driving and driverless delivery, as well as to promote the resolution of related social issues.</p>
<p><strong>Daiu Ko, CEO of Kudan, commented:</strong> “We are excited to see the release of autonomous driving products and associated mapping tools from Whale Dynamic adopting Kudan’s 3D-Lidar SLAM technology. This is the outcome of close collaboration between both companies, and we look forward to further expanding our partnership in both technology collaboration and business development in the global market.”</p>
<p><strong>David Yufei Chang, CEO and founder of Whale Dynamic, commented:</strong> “The close collaboration with Kudan brings us magnificent possibilities to create a more competitive autonomous driving solution for the world. The ongoing partnership is technologically pioneering in the market and promising for future mass-volume deployment. Kudan is our valuable partner, and we look forward to further collaborating in technology development and global business expansion.”</p>
<p>We will continue to provide updates on our collaboration and on new product delivery projects in the future.</p>
<p><strong>About Kudan Inc.</strong><br />
Kudan (Tokyo Stock Exchange securities code: 4425) is a deep tech research and development company specializing in algorithms for artificial perception (AP). As a complement to artificial intelligence (AI), AP functions allow machines to develop autonomy. Currently, Kudan is using its high-level technical innovation to explore business areas based on its own milestone models established for deep tech which provide wide-ranging impact on several major industrial fields.<br />
For more information, please refer to Kudan’s website at <a href="https://www.kudan.io/" target="_blank" rel="noopener noreferrer">https://www.kudan.io/</a>.</p>
<p>■Company Details<br />
Name: Kudan Inc.<br />
Securities Code: 4425<br />
Representative: CEO Daiu Ko</p>
<p><strong>About Whale Dynamic Co.Ltd.</strong><br />
Whale Dynamic is a fast-growing autonomous driving technology company in Shenzhen, China, focused on autonomous driving and intelligent traffic applications. The company holds many fundamental technology patents in vehicle autonomy, in fields including multi-sensor fusion perception, spatial-temporal synchronization, embedded vehicle systems, HD mapping, centimeter-level localization, and others. Its technology has been widely used by many transportation providers, Tier 1 companies, universities, and other institutions. Unlike most AD solution companies, which only retrofit passenger vehicles for autonomy, Whale Dynamic has applied its full-stack passenger vehicle autonomy technology to self-developed driverless autonomous vehicles, and has secured rich operational use cases through sophisticated design and extensive road testing.<br />
For more information, please refer to Whale Dynamic’s website at <a href="http://www.whaledynamic.com" target="_blank" rel="noopener">http://www.whaledynamic.com</a>.</p>
<p>■Company Details<br />
Name: Whale Dynamic Co.Ltd.<br />
Representative: CEO and Founder David Yufei Chang</p>
<p>■For more details, please contact below</p>
<p><span style="text-decoration: underline;">Kudan Inc.</span><br />
Email: contact2@kudan.eu</p>
<p><span style="text-decoration: underline;">Whale Dynamic Co.Ltd.</span><br />
Email: coop@whaledynamic.com</p><p>The post <a href="https://www.kudan.io/blog/whale-dynamic-releases-products-for-autonomous-driving-by-integrating-kudan-3d-lidar-slam/">China’s Whale Dynamic releases products for autonomous driving by integrating Kudan 3D-Lidar SLAM and won a project in Tier1 City in China</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1317</post-id>	</item>
		<item>
		<title>How to Calibrate a Camera for Visual SLAM (1 of 2)</title>
		<link>https://www.kudan.io/blog/how-to-calibrate-a-camera-for-visual-slam-part1/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-to-calibrate-a-camera-for-visual-slam-part1</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Wed, 22 Jun 2022 07:10:19 +0000</pubDate>
				<category><![CDATA[Tech Blog]]></category>
		<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[Autonomous Driving]]></category>
		<category><![CDATA[Autonomous Mobile Robots]]></category>
		<category><![CDATA[Calibration]]></category>
		<category><![CDATA[drones]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[Kudan SLAM]]></category>
		<category><![CDATA[Kudan Visual SLAM]]></category>
		<category><![CDATA[Simultaneous Localization and Mapping]]></category>
		<category><![CDATA[SLAM]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[Visual SLAM]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=1257</guid>

					<description><![CDATA[<p>Visual SLAM is an algorithm for a moving rigid body with a camera that estimates its motion and builds a model of its surrounding environment. Visual SLAM technology is crucial in various use-cases such as autonomous driving, autonomous mobile robots, drones, augmented reality, and virtual reality. Once you decide you&#8217;ll be using Visual SLAM for [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/how-to-calibrate-a-camera-for-visual-slam-part1/">How to Calibrate a Camera for Visual SLAM (1 of 2)</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<div id="attachment_1258" style="width: 893px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1258" loading="lazy" class="wp-image-1258 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-1_blog.png?resize=883%2C418&#038;ssl=1" alt="" width="883" height="418" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-1_blog.png?w=883&amp;ssl=1 883w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-1_blog.png?resize=300%2C142&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-1_blog.png?resize=768%2C364&amp;ssl=1 768w" sizes="(max-width: 883px) 100vw, 883px" data-recalc-dims="1" /><p id="caption-attachment-1258" class="wp-caption-text"><em>Figure 1: Calibration Chess Board</em></p></div>
<p>Visual SLAM is an algorithm for a moving rigid body with a camera that estimates its motion and builds a model of its surrounding environment. Visual SLAM technology is crucial in various use-cases such as autonomous driving, autonomous mobile robots, drones, augmented reality, and virtual reality.</p>
<p>Once you decide you&#8217;ll be using Visual SLAM for a use case, you need to look for specific characteristics in the camera. At the end of this article, we&#8217;ll link to an in-depth article we wrote earlier that breaks down each characteristic.</p>
<p>Next, once you have decided on the camera for your use case, your camera needs to be calibrated. What is calibration — you ask? The process of understanding the camera characteristics is called calibration.<br />
Accurate calibration is of utmost priority for outstanding SLAM performance. You may have the best-suited camera and an excellent SLAM algorithm [1]; however, if your calibration is inaccurate, the SLAM performance will deteriorate.</p>
<p>However, this is easier said than done: calibration is a complex process, and various steps must be followed to calibrate a camera accurately.</p>
<p>In this two-part article series, first, we will introduce the entire process in a step-by-step fashion and then dive into the exact details, walking you through in-depth information in the next article.</p>
<hr />
<h2><strong>Understanding the camera properties</strong></h2>
<p>To understand our camera calibration process, we must first go over the properties of a camera.</p>
<p>Camera properties can be classified into two classes:</p>
<ul>
<li>Intrinsic parameters: These are properties specific to a camera, such as a camera&#8217;s sensor size, aperture, focal length of its lens, and distortion. These properties are why two cameras at the same position and orientation can produce different images.</li>
<li>Extrinsic parameters: These are properties like position and orientation, which dictate how far apart the cameras are and at what angle they face each other.</li>
</ul>
<div id="attachment_1259" style="width: 635px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1259" loading="lazy" class="wp-image-1259 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-2_blog.jpg?resize=625%2C464&#038;ssl=1" alt="" width="625" height="464" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-2_blog.jpg?w=625&amp;ssl=1 625w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-2_blog.jpg?resize=300%2C223&amp;ssl=1 300w" sizes="(max-width: 625px) 100vw, 625px" data-recalc-dims="1" /><p id="caption-attachment-1259" class="wp-caption-text"><em>Figure 2: Stereo camera rig</em></p></div>
<p>As you may have realized, intrinsic properties are all that matters for a single-camera setup, and extrinsic properties only become relevant when more than one camera is involved in the Visual SLAM system.</p>
<hr />
<h2><strong>The calibration process: A 4-step approach</strong></h2>
<p>Now that we have a better understanding of the camera properties, we can define camera calibration as the process we follow to determine the intrinsic and extrinsic parameters of the cameras [2].</p>
<p>Not all the parameters can be measured by physically analyzing the camera, especially intrinsic parameters such as distortion. Software libraries are generally used to estimate the parameters, taking the video feed as input.</p>
<p>This is a standard practice in the computer vision industry, and we have left recommendations on software libraries that can be used to calibrate the camera at the end of the article.</p>
<p>So, thanks to software libraries, let us simplify the calibration process into four steps.</p>
<ol>
<li>Show a known object to a camera.</li>
<li>Store the seen properties of the known object in the software.</li>
<li>Move the object in front of the camera.</li>
<li>Let the software calculate the intrinsic and extrinsic parameters by comparing what is seen with what is known.</li>
</ol>
<div id="attachment_1260" style="width: 635px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1260" loading="lazy" class="wp-image-1260 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-3_blog.jpg?resize=625%2C412&#038;ssl=1" alt="" width="625" height="412" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-3_blog.jpg?w=625&amp;ssl=1 625w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-3_blog.jpg?resize=300%2C198&amp;ssl=1 300w" sizes="(max-width: 625px) 100vw, 625px" data-recalc-dims="1" /><p id="caption-attachment-1260" class="wp-caption-text"><em>Figure 3: Calibration process</em></p></div>
<p>The steps above would give you an overall idea of the entire approach. However, we want to provide you with all the details you&#8217;d require when you&#8217;re about to calibrate.</p>
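<p>Using OpenCV, one of the free libraries recommended at the end of this article, the four steps map onto a short script like the sketch below. The chess board dimensions, square size, and file paths are placeholders for your own setup:</p>
<pre><code>import glob
import cv2
import numpy as np

# Inner-corner count and square side length of the printed chess board
# (placeholder values; measure your own board, as noted above)
PATTERN = (9, 6)
SQUARE = 0.025  # metres

# Step 2: the known 3D geometry of the (planar) board, z = 0
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

objpoints, imgpoints = [], []
# Steps 1 and 3: frames recorded while moving the board in front of the camera
for path in glob.glob("calib_frames/*.png"):  # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)  # cv2.cornerSubPix can refine these further

# Step 4: solve for the intrinsics by comparing what is seen with what is known
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS reprojection error (px):", rms)
</code></pre>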
<h3><strong>Preparation for the calibration sequence recording</strong></h3>
<p>An accurate calibration starts before the process itself — during the preparation phase. Let&#8217;s take a step back and understand what steps you need to perform as preparation.</p>
<ul>
<li>Use good lighting conditions for the entire sequence.</li>
<li>Use the same camera settings (resolution, framerate, lens configuration) as in the sequences that are to be analyzed.</li>
<li>Ensure that the calibration pattern is planar. To achieve this, place it on a solid planar surface that does not bend easily (Kudan uses a chess board pattern tightly mounted on a 1cm-thick plastic board). Ensure that the paper is not folded or creased. Additionally, you may flatten the paper or fix it using sticky tape.</li>
<li>The calibration board needs to be as big as possible; A4 will work but isn&#8217;t ideal. Measure the side length of the squares on the printed paper.</li>
</ul>
<p>Further exact-walkthrough on the steps during the calibration will be covered in the second part of the series.</p>
<hr />
<h2><strong>The parameters to look for post-calibration</strong></h2>
<p>Once the calibration process is finished, we have the following intrinsic parameters:</p>
<ul>
<li>Focal length: fx, fy</li>
<li>Principal point: cx, cy</li>
<li>Radial distortion: k1, k2, k3, k4, k5, k6</li>
<li>Tangential distortion: p1, p2</li>
</ul>
<p>The focal length values (in pixels) scale with the width of the images and are inversely related to the field of view. Usually, the vertical and horizontal values should be similar, and in the stereo case the values between the cameras should also be close. The principal point should be around the center of the image.</p>
<p>Unfortunately, judging distortion parameters by looking at them is impossible because they are factors and ratios in complicated formulas.</p>
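<p>They can still be applied and checked indirectly. As a minimal OpenCV sketch (the numbers below are illustrative, not output from a real calibration), the estimated parameters are assembled into the 3&#215;3 camera matrix and a distortion-coefficient vector, then used to undistort an image; the sanity checks above (fx &#8776; fy, principal point near the image centre) can be applied directly to the values:</p>
<pre><code>import cv2
import numpy as np

# Illustrative estimates for a 640x480 camera (not real calibration output)
fx, fy, cx, cy = 610.2, 612.8, 319.5, 239.4
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# OpenCV orders the coefficients (k1, k2, p1, p2, k3[, k4, k5, k6])
dist = np.array([-0.28, 0.07, 0.001, -0.0005, 0.0])

img = cv2.imread("frame.png")  # placeholder file name
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("frame_undistorted.png", undistorted)
# Straight edges in the scene should now look straight in the output
</code></pre>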
<p>We will have additional extrinsic parameters as follows in a stereo setup [2] but we will focus on the single-camera setup for now:</p>
<ul>
<li>Translation vector (3×1) from the right camera to the left camera</li>
<li>Rotation matrix (3×3) to rectify the cameras</li>
</ul>
<hr />
<h2><strong>Decoding a good calibration</strong></h2>
<p>Aside from the ballpark considerations mentioned above, evaluating calibration quality through the numbers is challenging. Two distinct observations can be made on images after undistortion and rectification are applied:</p>
<ul>
<li>Straight lines in the real world should look straight in the view. Straight lines will curve close to the edges if calibration is imprecise in the distortion parameters.</li>
<li>The same visual features should appear on the same scanline in the left and right images. A poor stereo calibration will show vertical disparity between the same physical points in the two images.</li>
</ul>
<div id="attachment_1261" style="width: 893px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-1261" loading="lazy" class="wp-image-1261 size-full" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-4_blog.png?resize=883%2C204&#038;ssl=1" alt="" width="883" height="204" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-4_blog.png?w=883&amp;ssl=1 883w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-4_blog.png?resize=300%2C69&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2022/06/Pic-4_blog.png?resize=768%2C177&amp;ssl=1 768w" sizes="(max-width: 883px) 100vw, 883px" data-recalc-dims="1" /><p id="caption-attachment-1261" class="wp-caption-text"><em>Figure 4: Good rectified image (lines are straight and each cell is placed at same height on the left image and the right one)</em></p></div>
<p>Visual inspection can evaluate the conditions above in the captured images. However, the checks should be repeated at different points in the images, ideally using features at different distances from the cameras.</p>
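<p>The stereo scanline check can likewise be prepared with OpenCV (a sketch under the assumption that a stereo calibration has been saved beforehand; the file names and image size are hypothetical):</p>
<pre><code>import cv2
import numpy as np

# Hypothetical file holding previously saved stereo calibration results [2]
data = np.load("stereo_calib.npz")
K1, d1, K2, d2 = data["K1"], data["d1"], data["K2"], data["d2"]
R, T = data["R"], data["T"]   # rotation/translation between the two cameras
size = (1280, 720)            # (width, height) used during calibration

# Rectification transforms for each camera
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)

# Remap the left image; repeat with (K2, d2, R2, P2) for the right one
mapx, mapy = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
left_rect = cv2.remap(cv2.imread("left.png"), mapx, mapy, cv2.INTER_LINEAR)

# After rectification, the same physical point should lie on the same
# pixel row in both images; drawing horizontal lines makes this visible
</code></pre>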
<hr />
<h2><strong>Final words</strong></h2>
<p>In this article, we introduced the camera properties, walked through the 4-step calibration process using software libraries, covered how to prepare for calibration, and discussed some post-calibration considerations.</p>
<p><a href="https://www.kudan.io/archives/1107" target="_blank" rel="noopener">Here</a>, you can read more about choosing the best camera for your Visual SLAM use case.</p>
<p>For free software libraries that can be used for camera calibration, checkout <a href="https://opencv.org/" target="_blank" rel="noopener">OpenCV</a>, <a href="https://data.caltech.edu/records/20164" target="_blank" rel="noopener">Jean-Yves Bouguet&#8217;s Camera Calibration Toolbox for Matlab</a>, and <a href="https://www.dlr.de/rm/en/desktopdefault.aspx/tabid-3925/" target="_blank" rel="noopener">The DLR Camera Calibration Toolbox</a>.</p>
<p>If you&#8217;ve got more questions on your specific use cases, please feel free to reach out to us, and meanwhile, keep an eye on this space for the follow-up article on camera calibration.</p>
<hr />
<h3>References</h3>
<p>[1] Taketomi, Takafumi &amp; Uchiyama, Hideaki &amp; Ikeda, Sei. (2017). Visual SLAM algorithms: a survey from 2010 to 2016. IPSJ Transactions on Computer Vision and Applications. 9. 10.1186/s41074-017-0027-2. [<a href="https://www.researchgate.net/publication/318235730_Visual_SLAM_algorithms_a_survey_from_2010_to_2016/fulltext/595f98d70f7e9b8194ecbeea/Visual-SLAM-algorithms-a-survey-from-2010-to-2016.pdf" target="_blank" rel="noopener">PDF</a>]</p>
<p>[2] Qi, Wang &amp; Li, Fu &amp; Zhenzhong, Liu. (2010). Review on Camera Calibration. 3354–3358. 10.1109/CCDC.2010.5498574. [<a href="https://www.researchgate.net/profile/Wang-Qi-34/publication/224151450_Review_on_Camera_Calibration/links/5b04cd9c4585154aeb07fcb6/Review-on-Camera-Calibration.pdf" target="_blank" rel="noopener">PDF</a>]</p>
<p>[3] G. Carrera, A. Angeli and A. J. Davison (2011). SLAM-based automatic extrinsic calibration of a multi-camera rig. IEEE International Conference on Robotics and Automation, pp. 2652–2659. DOI: 10.1109/ICRA.2011.5980294. [<a href="https://ieeexplore.ieee.org/document/5980294" target="_blank" rel="noopener">PDF</a>]</p>
<hr />
<p>■For more details, please contact us from <a href="https://www.kudan.io/contact" target="_blank" rel="noopener noreferrer">here</a>.</p><p>The post <a href="https://www.kudan.io/blog/how-to-calibrate-a-camera-for-visual-slam-part1/">How to Calibrate a Camera for Visual SLAM (1 of 2)</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1257</post-id>	</item>
		<item>
		<title>Kudan to sponsor the upcoming ICCV workshop: &#8220;Map-based Localization for Autonomous Driving&#8221; together with Artisense in October 2021</title>
		<link>https://www.kudan.io/blog/kudan-sponsored-iccv-workshop-for-map-based-localization-for-autonomous-driving/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=kudan-sponsored-iccv-workshop-for-map-based-localization-for-autonomous-driving</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Tue, 17 Aug 2021 03:02:37 +0000</pubDate>
				<category><![CDATA[Press Release]]></category>
		<category><![CDATA[artisense]]></category>
		<category><![CDATA[Autonomous Driving]]></category>
		<category><![CDATA[ICCV]]></category>
		<category><![CDATA[Kudan]]></category>
		<category><![CDATA[map-based localization]]></category>
		<category><![CDATA[sponsor]]></category>
		<guid isPermaLink="false">https://www.kudan.io/?p=814</guid>

					<description><![CDATA[<p>Kudan Inc. (headquarters in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”) is pleased to announce that Kudan and Artisense Corporation (Kudan’s group company, hereafter “Artisense”）sponsor the Workshop on “Map-based Localization for Autonomous Driving” at the International Conference on Computer Vision (ICCV), taking place 11-17 October 2021 to contribute to more advancements in SLAM and localization [&#8230;]</p>
<p>The post <a href="https://www.kudan.io/blog/kudan-sponsored-iccv-workshop-for-map-based-localization-for-autonomous-driving/">Kudan to sponsor the upcoming ICCV workshop: “Map-based Localization for Autonomous Driving” together with Artisense in October 2021</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Kudan Inc. (headquarters in Shibuya-ku, Tokyo; CEO Daiu Ko, hereafter “Kudan”) is pleased to announce that Kudan and Artisense Corporation (Kudan’s group company, hereafter “Artisense”) will sponsor the Workshop on “Map-based Localization for Autonomous Driving” at the International Conference on Computer Vision (ICCV), taking place 11-17 October 2021, to contribute to further advancements in SLAM and localization in this area.</p>
<p><img loading="lazy" class="size-full wp-image-815 aligncenter" src="https://i0.wp.com/www.kudan.io/wp-content/uploads/2021/08/pic_ICCV.png?resize=943%2C527&#038;ssl=1" alt="" width="943" height="527" srcset="https://i0.wp.com/www.kudan.io/wp-content/uploads/2021/08/pic_ICCV.png?w=943&amp;ssl=1 943w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2021/08/pic_ICCV.png?resize=300%2C168&amp;ssl=1 300w, https://i0.wp.com/www.kudan.io/wp-content/uploads/2021/08/pic_ICCV.png?resize=768%2C429&amp;ssl=1 768w" sizes="(max-width: 943px) 100vw, 943px" data-recalc-dims="1" /></p>
<p>Kudan and Artisense sponsored the successful first workshop on “Map-based Localization for Autonomous Driving” (MLAD), which took place at the European Conference on Computer Vision (ECCV) in August 2020.</p>
<p>This coming workshop is the second edition. Despite the progress made over the last few years, numerous questions remain in the field of map-based localization. These include how maps can be generated efficiently, at low cost, and at very large scale, and, more importantly, how those maps can be kept up to date. The workshop will explore and answer these questions.</p>
<p>Confirmed speakers for this workshop include Wolfram Burgard (University of Freiburg, Toyota Research Institute), Michael Milford (Queensland University of Technology) and Torsten Sattler (Czech Technical University), with several more speakers expected.</p>
<p>The workshop will host the relocalization challenge once again based on the “<a href="https://www.4seasons-dataset.com/" target="_blank" rel="noopener noreferrer">4Seasons</a>” dataset, a new multi-weather, all-seasons dataset recorded using Artisense’s <a href="https://www.artisense.ai/vins-2020" target="_blank" rel="noopener noreferrer">Visual Inertial Navigation System (VINS)</a>. This dataset aims to enable research in robust vision-based odometry, as well as map-based localization.</p>
<p>Kudan and Artisense will continue to promote the further development of SLAM and localization technology in the area of autonomous driving, together with leading internal and external experts in this area.</p>
<p>For more details on the workshop and topics covered, please visit <a href="https://sites.google.com/view/mlad-iccv2021" target="_blank" rel="noopener noreferrer">here</a>.<br />
We look forward to great discussion and promising new concepts for map-based relocalization!</p>
<p><strong>About Artisense Corporation</strong><br />
Artisense is a computer vision and sensor fusion software company that develops an integrated positioning platform using cameras as lead sensors for the automation of robots, vehicles, and spatial intelligence applications. On a mission to accelerate the adoption of autonomous robots and machines, Artisense provides products and technology for highly accurate, robust, safe, and low-cost navigation in any space.<br />
For more information, please refer to Artisense’s website at <a href="http://www.artisense.ai/" target="_blank" rel="noopener noreferrer">http://www.artisense.ai/</a>.</p>
<p><strong>About Kudan Inc.</strong><br />
Kudan (Tokyo Stock Exchange securities code: 4425) is a Deep Tech research and development company specializing in algorithms to enable artificial perception (AP). As a complement to artificial intelligence (AI), AP functions allow machines to develop autonomy. Currently, Kudan is using its high-level technical innovation to explore business areas based on its milestone models established for Deep Tech, which provide wide-ranging impact on several major industrial fields. For more information, please refer to Kudan’s website at <a href="https://www.kudan.io/" target="_blank" rel="noopener noreferrer">https://www.kudan.io/</a>.</p>
<p>■Company Details<br />
Name: Kudan Inc.<br />
Securities Code: 4425<br />
Representative: CEO Daiu Ko</p><p>The post <a href="https://www.kudan.io/blog/kudan-sponsored-iccv-workshop-for-map-based-localization-for-autonomous-driving/">Kudan to sponsor the upcoming ICCV workshop: “Map-based Localization for Autonomous Driving” together with Artisense in October 2021</a> first appeared on <a href="https://www.kudan.io">Kudan global</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">814</post-id>	</item>
	</channel>
</rss>
