Implementación de Rutinas en ROS2 para la Aplicación de Algoritmos Multiagentes en el Movimiento de Robots Tipo Soccer Small Size.

dc.contributor.advisorMartinez Vasquez, David Alejandro
dc.contributor.advisorAmaya, Sindy Paola
dc.contributor.advisorMateus Rojas, Armando
dc.contributor.authorCastiblanco Rey, Daniel
dc.contributor.authorVega Otálora, John Felipe
dc.contributor.corporatenameUniversidad Santo Tomásspa
dc.contributor.cvlachttps://scienti.minciencias.gov.co/cvlac/visualizador/generarCurriculoCv.do?cod_rh=0001560096spa
dc.contributor.cvlachttps://scienti.minciencias.gov.co/cvlac/visualizador/generarCurriculoCv.do?cod_rh=0000796425spa
dc.contributor.cvlachttps://scienti.minciencias.gov.co/cvlac/visualizador/generarCurriculoCv.do?cod_rh=0000680630spa
dc.contributor.googlescholarhttps://scholar.google.com/citations?user=Gg2sofAAAAAJ&hl=es&oi=aospa
dc.contributor.orcidhttps://orcid.org/0000-0001-9750-2653spa
dc.contributor.orcidhttps://orcid.org/0000-0002-1714-1593spa
dc.contributor.orcidhttps://orcid.org/0000-0002-2399-4859spa
dc.coverage.campusCRAI-USTA Bogotáspa
dc.date.accessioned2025-01-16T14:51:54Z
dc.date.available2025-01-16T14:51:54Z
dc.date.issued2024
dc.descriptionEl presente trabajo busca abordar la implementación de rutinas propias de sistemas multiagentes, mediante el sistema operativo de robots en su segunda versión (ROS2) para el equipo STOx’s; ROS2 facilita la implementación y desarrollo de sistemas robóticos. Esta es una propuesta innovadora ante los diferentes desafíos a los cuales se enfrenta la robótica, como lo pueden ser la navegación autónoma y la colaboración eficiente entre robots. Por ello, con el respaldo de ROS2, se busca mejorar la coordinación y el rendimiento del grupo de robots en escenarios de prueba, contribuyendo así al avance de este recurso investigativo. Como aportes de este trabajo se tienen el desarrollo de los comportamientos multiagentes para los robots del equipo STOx’s y un mecanismo de visión artificial que permite la localización de cada robot agente desde una perspectiva de cámara lateral en lugar de la perspectiva superior normalmente utilizada. El equipo STOx’s del grupo de robótica de la Universidad Santo Tomás desarrolló el grupo de robots para la liga Soccer Small Size League (SSL), con el cual se participó en la competencia de RoboCup durante los años 2011 a 2017, obteniendo muy buenos resultados. Debido al bajo uso del grupo de robots durante un amplio margen de tiempo (desde el 2017 al 2023), se presentan diferentes desafíos a la hora de desarrollar el proyecto. Esto se debe a que varias plataformas robóticas pueden no estar operativas o llegar a requerir mantenimiento. Por lo tanto, se optó por manejar un pequeño número de robots funcionales, los cuales son ideales para afrontar el desarrollo del proyecto debido a su versatilidad y precisión. Esta elección permitió garantizar la viabilidad del proyecto, aprovechando los recursos disponibles y la validez en la implementación de los algoritmos propuestos. En un principio, se llevó a cabo una revisión bibliográfica para obtener modelos de algoritmos multiagentes, así como herramientas que permitiesen la detección del grupo de robots en tiempo real y de manera estable. Posteriormente, se buscó implementar diferentes técnicas para la detección y localización del grupo de robots, de manera que fuese posible probar su desplazamiento en un entorno delimitado y realizar pruebas de forma satisfactoria. Por último, se trasladaron a ROS2 las funcionalidades correspondientes al movimiento de los robots, así como la detección y localización de estos; de igual manera, se ajustaron los modelos multiagentes previamente probados en simulación para la comprobación de su funcionamiento en el entorno dado a través de los robots, realizando así el proceso de validación de la propuesta sobre un escenario físico. Como resultado, se logró que los robots realizaran movimientos coordinados con base en los algoritmos multiagentes implementados.spa
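As a purely illustrative aside (not the detection pipeline developed in this work), the following Python sketch shows one common way a colored robot marker could be located in a single side-camera frame using OpenCV HSV thresholding; the HSV range, camera index, and function name are assumptions made for this example only.

# Hypothetical sketch, assuming OpenCV 4.x: locate a colored marker in one
# frame from a fixed side camera. Color range and camera index are illustrative.
import cv2
import numpy as np

def find_marker_center(frame_bgr, hsv_low=(40, 80, 80), hsv_high=(80, 255, 255)):
    """Return the (x, y) pixel centroid of the largest blob inside the HSV range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low, dtype=np.uint8), np.array(hsv_high, dtype=np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
    ok, frame = cap.read()
    if ok:
        print(find_marker_center(frame))
    cap.release()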
dc.description.abstractThe present work aims to address the implementation of routines inherent to multi-agent systems through the Robot Operating System in its second version (ROS2) for the STOx’s team; ROS2 facilitates the implementation and development of robotic systems. This is an innovative proposal in response to the various challenges faced by robotics, such as autonomous navigation and efficient collaboration between robots. Therefore, with the support of ROS2, the aim is to improve the coordination and performance of the group of robots in test scenarios, thus contributing to the advancement of this research resource. Contributions of this work include the development of multi-agent behaviors for the STOx’s team robots and a computer vision mechanism that allows the localization of each robot-agent from a side camera perspective instead of the normally used top-down perspective. The STOx’s team, from the Robotics Group of the Universidad Santo Tomás, developed a group of robots for the Soccer Small Size League (SSL), participating in the RoboCup competition from 2011 to 2017 with very good results. Due to the low usage of the group of robots over a long period of time (from 2017 to 2023), various challenges arise in developing the project. This is because several robotic platforms may be non-operational or may require maintenance. Therefore, it was decided to handle a small number of functional robots, which are ideal for tackling the development of the project due to their versatility and precision. This choice allowed for guaranteeing the project’s feasibility by taking advantage of the available resources and ensuring the validity of the implementation of the proposed algorithms. Initially, a literature review was conducted to obtain models of multi-agent algorithms, as well as tools that would allow real-time and stable detection of the group of robots. Subsequently, different techniques were implemented for the detection and localization of the group of robots, making it possible to test their movement in a delimited environment and to carry out tests satisfactorily. Finally, the functionalities corresponding to the movement of the robots, as well as their detection and localization, were transferred to ROS2. Likewise, the multi-agent models previously tested in simulation were adjusted to verify their functionality in the given environment through the robots, thus carrying out the validation process of the proposal in a physical scenario. As a result, the robots achieved coordinated movements based on the implemented multi-agent algorithms.spa
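To make the ROS2 side of the abstract concrete for readers unfamiliar with the framework, here is a minimal rclpy sketch of the kind of node that could publish velocity commands for one robot agent; this is not the STOx's team code, and the node name, topic name, and publishing rate are assumptions chosen for illustration.

# Hypothetical sketch, assuming a ROS2 (rclpy) installation: a node that
# publishes Twist velocity commands for a single robot agent at 10 Hz.
# The node name and the '/robot_0/cmd_vel' topic are illustrative assumptions.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class AgentVelocityPublisher(Node):
    def __init__(self):
        super().__init__('agent_velocity_publisher')
        self.publisher = self.create_publisher(Twist, '/robot_0/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.publish_command)

    def publish_command(self):
        msg = Twist()
        # Placeholder values; a multi-agent behavior would compute these from
        # the estimated poses of this robot and its neighbors.
        msg.linear.x = 0.2
        msg.angular.z = 0.0
        self.publisher.publish(msg)

def main():
    rclpy.init()
    node = AgentVelocityPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()

In a multi-robot setup, a common pattern is one such node per robot, or a single coordinator node holding one publisher per robot.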
dc.description.degreelevelPregradospa
dc.description.degreenameIngeniero Electrónicospa
dc.format.mimetypeapplication/pdfspa
dc.identifier.citationCastiblanco Rey, D. y Vega Otálora, J. F. (2024). Implementación de Rutinas en ROS2 para la Aplicación de Algoritmos Multiagentes en el Movimiento de Robots Tipo Soccer Small Size. [Trabajo de Grado, Universidad Santo Tomás]. Repositorio Institucional.spa
dc.identifier.instnameinstname:Universidad Santo Tomásspa
dc.identifier.reponamereponame:Repositorio Institucional Universidad Santo Tomásspa
dc.identifier.repourlrepourl:https://repository.usta.edu.cospa
dc.identifier.urihttp://hdl.handle.net/11634/58975
dc.language.isospaspa
dc.publisherUniversidad Santo Tomásspa
dc.publisher.facultyFacultad de Ingeniería Electrónicaspa
dc.publisher.programPregrado Ingeniería Electrónicaspa
dc.relation.referencesJ. G. Guarnizo Marin, D. Bautista Díaz y J. S. Sierra Torres, «Una revisión sobre la evolución de la robótica móvil», 2021.spa
dc.relation.referencesS. G. Tzafestas, «Mobile robot control and navigation: A global overview», Journal of Intelligent & Robotic Systems, vol. 91, págs. 35-58, 2018.spa
dc.relation.referencesF. Rubio, F. Valero y C. Llopis-Albert, «A review of mobile robots: Concepts, methods, theoretical framework, and applications», International Journal of Advanced Robotic Systems, vol. 16, n.o 2, pág. 1 729 881 419 839 596, 2019.spa
dc.relation.referencesK. Zhang, Z. Yang y T. Başar, «Multi-agent reinforcement learning: A selective overview of theories and algorithms», Handbook of reinforcement learning and control, págs. 321-384, 2021.spa
dc.relation.referencesA. Oroojlooy y D. Hajinezhad, «A review of cooperative multi-agent deep reinforcement learning», Applied Intelligence, vol. 53, n.o 11, págs. 13 677-13 722, 2023.spa
dc.relation.referencesInternational Federation of Robotics, Top 5 Robot Trends 2023, https://ifr.org/ifr-press-releases/news/top-5-robot-trends-2023, 2023.spa
dc.relation.referencesJ. A. Gurzoni, M. F. Martins, F. Tonidandel y R. A. Bianchi, «On the construction of a RoboCup small size league team», Journal of the Brazilian Computer Society, vol. 17, págs. 69-82, 2011.spa
dc.relation.referencesC. Camacho, C. Higuera, J. Guarnizo, Y. Suarez y N. Garzon, STOx’s Team description paper 2018. ene. de 2018.spa
dc.relation.referencesRobot Operating System, https://www.ros.org, 2021.spa
dc.relation.referencesD. McNulty, A. Hennessy, M. Li, E. Armstrong y K. M. Ryan, «A review of Li-ion batteries for autonomous mobile robots: Perspectives and outlook for the future», Journal of Power Sources, vol. 545, pág. 231 943, 2022.spa
dc.relation.referencesA. Loganathan y N. S. Ahmad, «A systematic review on recent advances in autonomous mobile robot navigation», Engineering Science and Technology, an International Journal, vol. 40, pág. 101 343, 2023.spa
dc.relation.referencesG. Fragapane, R. De Koster, F. Sgarbossa y J. O. Strandhagen, «Planning and control of autonomous mobile robots for intralogistics: Literature review and research agenda», European Journal of Operational Research, vol. 294, n.o 2, págs. 405-426, 2021.spa
dc.relation.referencesS. Macenski, T. Foote, B. Gerkey, C. Lalancette y W. Woodall, «Robot Operating System 2: Design, architecture, and uses in the wild», Science Robotics, vol. 7, n.o 66, eabm6074, 2022.spa
dc.relation.referencesA. J. Lee, W. Song, B. Yu, D. Choi, C. Tirtawardhana y H. Myung, «Survey of robotics technologies for civil infrastructure inspection», Journal of Infrastructure Intelligence and Resilience, vol. 2, n.o 1, pág. 100 018, 2023.spa
dc.relation.referencesY. Bai, B. Zhang, N. Xu, J. Zhou, J. Shi y Z. Diao, «Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review», Computers and Electronics in Agriculture, vol. 205, pág. 107 584, 2023.spa
dc.relation.referencesM. Tavakoli, J. Carriere y A. Torabi, «Robotics, smart wearable technologies, and autonomous intelligent systems for healthcare during the COVID-19 pandemic: An analysis of the state of the art and future vision», Advanced Intelligent Systems, vol. 2, n.o 7, pág. 2 000 071, 2020.spa
dc.relation.referencesJ. G. Guarnizo y M. M. Arteche, «Robot soccer strategy based on hierarchical finite state machine to centralized architectures», IEEE Latin America Transactions, vol. 14, n.o 8, págs. 3586-3596, 2016.spa
dc.relation.referencesE. Antonioni, V. Suriani, F. Riccio y D. Nardi, «Game strategies for physical robot soccer players: a survey», IEEE Transactions on Games, vol. 13, n.o 4, págs. 342-357, 2021.spa
dc.relation.referencesJ. G. Guarnizo, C. L. Trujillo y N. L. Díaz, «Fútbol de robots: orígenes, federaciones, ligas y horizontes de investigación-Robot Soccer: Origins, Federations, Leagues and Research Horizons», Ingenium Revista de la facultad de ingeniería, vol. 17, n.o 33, págs. 54-67, 2016.spa
dc.relation.referencesA. Weitzenfeld, J. Biswas, M. Akar y K. Sukvichai, «Robocup small-size league: Past, present and future», en RoboCup 2014: Robot World Cup XVIII 18, Springer, 2015, págs. 611-623.spa
dc.relation.referencesY. Jiang, X. Li, H. Luo, S. Yin y O. Kaynak, «Quo vadis artificial intelligence?», Discover Artificial Intelligence, vol. 2, n.o 1, pág. 4, 2022.spa
dc.relation.referencesP. P. Angelov, E. A. Soares, R. Jiang, N. I. Arnold y P. M. Atkinson, «Explainable artificial intelligence: an analytical review», Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 11, n.o 5, e1424, 2021.spa
dc.relation.referencesQ. Zhang, L. T. Yang, Z. Chen y P. Li, «A survey on deep learning for big data», Information Fusion, vol. 42, págs. 146-157, 2018.spa
dc.relation.referencesM. Gheisari, G. Wang y M. Z. A. Bhuiyan, «A survey on deep learning in big data», en 2017 IEEE international conference on computational science and engineering (CSE) and IEEE international conference on embedded and ubiquitous computing (EUC), IEEE, vol. 2, 2017, págs. 173-180.spa
dc.relation.referencesS. F. Nasim, M. R. Ali y U. Kulsoom, «Artificial intelligence incidents & ethics a narrative review», International Journal of Technology, Innovation and Management (IJTIM), vol. 2, n.o 2, págs. 52-64, 2022.spa
dc.relation.referencesL. H. Kaack, P. L. Donti, E. Strubell, G. Kamiya, F. Creutzig y D. Rolnick, «Aligning artificial intelligence with climate change mitigation», Nature Climate Change, vol. 12, n.o 6, págs. 518-527, 2022.spa
dc.relation.referencesA. G. Marcos, F. J. M. de Pisón Ascacíbar, F. A. Elías, M. C. Limas, J. B. O. Meré, E. P. V. González et al., «Técnicas y algoritmos básicos de visión artificial», Técnicas y Algoritmos Básicos de Visión Artificial, 2006.spa
dc.relation.referencesA. Voulodimos, N. Doulamis, A. Doulamis y E. Protopapadakis, «Deep learning for computer vision: A brief review», Computational intelligence and neuroscience, vol. 2018, n.o 1, pág. 7 068 349, 2018.spa
dc.relation.referencesN. O’Mahony, S. Campbell, A. Carvalho et al., «Deep learning vs. traditional computer vision», en Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC), Volume 1 1, Springer, 2020, págs. 128-144.spa
dc.relation.referencesM. A. Ponti, L. S. F. Ribeiro, T. S. Nazare, T. Bui y J. Collomosse, «Everything you wanted to know about deep learning for computer vision but were afraid to ask», en 2017 30th SIBGRAPI conference on graphics, patterns and images tutorials (SIBGRAPI-T), IEEE, 2017, págs. 17-41.spa
dc.relation.referencesJ. Janai, F. Güney, A. Behl, A. Geiger et al., «Computer vision for autonomous vehicles: Problems, datasets and state of the art», Foundations and Trends® in Computer Graphics and Vision, vol. 12, n.o 1–3, págs. 1-308, 2020.spa
dc.relation.referencesA. Dorri, S. S. Kanhere y R. Jurdak, «Multi-agent systems: A survey», IEEE Access, vol. 6, págs. 28 573-28 593, 2018.spa
dc.relation.referencesS. Gronauer y K. Diepold, «Multi-agent deep reinforcement learning: a survey», Artificial Intelligence Review, vol. 55, n.o 2, págs. 895-943, 2022.spa
dc.relation.referencesE. Erős, M. Dahl, K. Bengtsson, A. Hanna y P. Falkman, «A ROS2 based communication architecture for control in collaborative and intelligent automation systems», Procedia Manufacturing, vol. 38, págs. 349-357, 2019.spa
dc.relation.referencesM. Quigley, B. Gerkey y W. D. Smart, Programming Robots with ROS: a practical introduction to the Robot Operating System. O’Reilly Media, Inc., 2015.spa
dc.relation.referencesZ. Gao, T. Wanyama, I. Singh, A. Gadhrri y R. Schmidt, «From industry 4.0 to robotics 4.0-a conceptual framework for collaborative and intelligent robotic systems», Procedia manufacturing, vol. 46, págs. 591-599, 2020.spa
dc.relation.referencesE. C. Camacho, C. Higuera, J. G. Guarnizo, Y. Suarez, N. Garzon y N. Bonifaz, «STOx’s Team description paper 2018»,spa
dc.relation.referencesS. Rodríguez, E. Rojas, K. Pérez, J. Lopez-Jimenez, C. Quintero y J. Calderon, «STOx’s 2014 Extended Team Description Paper», jul. de 2014.spa
dc.relation.referencesJ. Qian, B. Zi, D. Wang, Y. Ma y D. Zhang, «The design and development of an omnidirectional mobile robot oriented to an intelligent manufacturing system», Sensors, vol. 17, n.o 9, pág. 2073, 2017.spa
dc.relation.referencesM. R. Azizi, A. Rastegarpanah y R. Stolkin, «Motion planning and control of an omnidirectional mobile robot in dynamic environments», Robotics, vol. 10, n.o 1, pág. 48, 2021.spa
dc.relation.referencesA. Sheikhlar y A. Fakharian, «Adaptive optimal control via reinforcement learning for omni-directional wheeled robots», en 2016 4th International Conference on Control, Instrumentation, and Automation (ICCIA), IEEE, 2016, págs. 208-213.spa
dc.relation.referencesM. A. Khalighi y M. Uysal, «Survey on free space optical communication: A communication theory perspective», IEEE communications surveys & tutorials, vol. 16, n.o 4, págs. 2231-2258, 2014.spa
dc.relation.referencesA. J. Paulraj, D. A. Gore, R. U. Nabar y H. Bolcskei, «An overview of MIMO communications - a key to gigabit wireless», Proceedings of the IEEE, vol. 92, n.o 2, págs. 198-218, 2004.spa
dc.relation.referencesA. E. Willner, Y. Ren, G. Xie et al., «Recent advances in high-capacity free-space optical and radio-frequency communications using orbital angular momentum multiplexing», Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 375, n.o 2087, pág. 20 150 439, 2017.spa
dc.relation.referencesY. J. Lee, I. Atkinson, J. Trevathan, W. Read y T. Myers, «An Intelligent Agent System for Managing Heterogeneous Sensors in Dispersed and Disparate Wireless Sensor Network Systems»,spa
dc.relation.referencesY. Yang, Y. Xiao y T. Li, «Attacks on formation control for multiagent systems», IEEE Transactions on Cybernetics, vol. 52, n.o 12, págs. 12 805-12 817, 2021.spa
dc.relation.referencesF. Derakhshan y S. Yousefi, «A review on the applications of multiagent systems in wireless sensor networks», International Journal of Distributed Sensor Networks, vol. 15, n.o 5, pág. 1 550 147 719 850 767, 2019.spa
dc.relation.referencesY. Li y C. Tan, «A survey of the consensus for multi-agent systems», Systems Science & Control Engineering, vol. 7, n.o 1, págs. 468-482, 2019.spa
dc.relation.referencesR. Olfati-Saber, J. A. Fax y R. M. Murray, «Consensus and cooperation in networked multi-agent systems», Proceedings of the IEEE, vol. 95, n.o 1, págs. 215-233, 2007.spa
dc.relation.referencesY. Zheng, J. Ma y L. Wang, «Consensus of hybrid multi-agent systems», IEEE transactions on neural networks and learning systems, vol. 29, n.o 4, págs. 1359-1365, 2017.spa
dc.relation.referencesH. Zhang y S. Liyanage, «Finite-time formation control for multi-agent systems underlying heterogeneous communication typologies», en 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), IEEE, 2020, págs. 1441-1446.spa
dc.relation.referencesL. Xue, Y. Liu, Z.-Q. Gu, Z.-H. Li y X.-P. Guan, «Joint design of clustering and in-cluster data route for heterogeneous wireless sensor networks», International Journal of Automation and Computing, vol. 14, n.o 6, págs. 637-649, 2017.spa
dc.relation.referencesJ. R. Marden, G. Arslan y J. S. Shamma, «Cooperative Control and Potential Games», IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, n.o 6, págs. 1393-1407, 2009. DOI: 10.1109/TSMCB.2009.2017273.spa
dc.relation.referencesA. C. Jiménez, V. García-Díaz y S. Bolaños, «A decentralized framework for multi-agent robotic systems», Sensors, vol. 18, n.o 2, pág. 417, 2018.spa
dc.relation.referencesB. Xin, G.-Q. Gao, Y.-L. Ding, Y.-G. Zhu y H. Fang, «Distributed multi-robot motion planning for cooperative multi-area coverage», en 2017 13th IEEE International Conference on Control & Automation (ICCA), IEEE, 2017, págs. 361-366.spa
dc.relation.referencesG. Philip, S. Givigi y H. Schwartz, «Multi-robot exploration using potential games», ago. de 2013, págs. 1203-1210, ISBN: 978-1-4673-5557-5. DOI: 10.1109/ICMA.2013.6618085.spa
dc.relation.referencesY. L. Lim, «Potential game based cooperative control in dynamic environments», 2011.spa
dc.relation.referencesD. S. Leslie y J. R. Marden, «Equilibrium selection in potential games with noisy rewards», en International Conference on NETwork Games, Control and Optimization (NetGCooP 2011), 2011, págs. 1-4.spa
dc.relation.referencesJ. K. Gupta, M. Egorov y M. Kochenderfer, «Cooperative multi-agent control using deep reinforcement learning», en Autonomous Agents and Multiagent Systems: AAMAS 2017 Workshops, Best Papers, São Paulo, Brazil, May 8-12, 2017, Revised Selected Papers 16, Springer, 2017, págs. 66-83.spa
dc.relation.referencesJ. Ibarz, J. Tan, C. Finn, M. Kalakrishnan, P. Pastor y S. Levine, «How to train your robot with deep reinforcement learning: lessons we have learned», The International Journal of Robotics Research, vol. 40, n.o 4-5, págs. 698-721, 2021.spa
dc.relation.referencesT. Johannink, S. Bahl, A. Nair et al., «Residual reinforcement learning for robot control», en 2019 international conference on robotics and automation (ICRA), IEEE, 2019, págs. 6023-6029.spa
dc.relation.referencesT. Zhang y H. Mo, «Reinforcement learning for robot research: A comprehensive review and open issues», International Journal of Advanced Robotic Systems, vol. 18, n.o 3, pág. 17 298 814 211 007 305, 2021.spa
dc.relation.referencesJ. Kober, J. Bagnell y J. Peters, «Reinforcement Learning in Robotics: A Survey», The International Journal of Robotics Research, vol. 32, págs. 1238-1274, sep. de 2013. DOI: 10.1177/0278364913495721.spa
dc.relation.referencesH. Bae, G. Kim, J. Kim, D. Qian y S. Lee, «Multi-robot path planning method using reinforcement learning», Applied sciences, vol. 9, n.o 15, pág. 3057, 2019.spa
dc.relation.referencesC. Russo, H. Ramón, L. Cicerchia et al., «Visión artificial aplicada en agricultura de precisión», 2018.spa
dc.relation.referencesL. Zhou, L. Zhang y N. Konz, «Computer vision techniques in manufacturing», IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, n.o 1, págs. 105-117, 2022.spa
dc.relation.referencesZ. Liu, H. Ukida, P. Ramuhalli y D. Forsyth, «Integrated imaging and vision techniques for industrial inspection: A special issue on machine vision and applications», Mach. Vis. Appl., vol. 21, págs. 597-599, ago. de 2010. DOI: 10.1007/s00138-010-0277-9.spa
dc.relation.referencesA. M. Al-Oraiqat, T. Smirnova, O. Drieiev et al., «Method for Determining Treated Metal Surface Quality Using Computer Vision Technology», Sensors, vol. 22, n.o 16, 2022, ISSN: 1424-8220. DOI: 10.3390/s22166223. dirección: https://www.mdpi.com/1424-8220/22/16/6223.spa
dc.relation.referencesY. Matsuzaka y R. Yashiro, «AI-based computer vision techniques and expert systems», AI, vol. 4, n.o 1, págs. 289-302, 2023.spa
dc.relation.referencesC. Patruno, V. Renò, M. Nitti, N. Mosca, M. di Summa y E. Stella, «Vision-based omnidirectional indoor robots for autonomous navigation and localization in manufacturing industry», Heliyon, 2024.spa
dc.relation.referencesG. Nirmala, S. Geetha y S. Selvakumar, «Mobile robot localization and navigation in artificial intelligence: Survey», Computational Methods in Social Sciences, vol. 4, n.o 2, pág. 12, 2016.spa
dc.relation.referencesS. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz y D. Terzopoulos, «Image segmentation using deep learning: A survey», IEEE transactions on pattern analysis and machine intelligence, vol. 44, n.o 7, págs. 3523-3542, 2021.spa
dc.relation.referencesY. Guo, Y. Liu, T. Georgiou y M. S. Lew, «A review of semantic segmentation using deep neural networks», International journal of multimedia information retrieval, vol. 7, págs. 87-93, 2018.spa
dc.relation.referencesS. Peng, W. Jiang, H. Pi, X. Li, H. Bao y X. Zhou, «Deep snake for real-time instance segmentation», en Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, págs. 8533-8542.spa
dc.relation.referencesF. Garcia-Lamont, J. Cervantes, A. López y L. Rodriguez, «Segmentation of images by color features: A survey», Neurocomputing, vol. 292, págs. 1-27, 2018.spa
dc.relation.referencesLighthouse Positioning System, https://www.bitcraze.io/documentation/system/positioning/ligthouse-positioning-system/, 2015.spa
dc.relation.referencesLighthouse Positioning System, https://github.com/edgarcamilocamacho/promocionUsta/blob/master/futbolistas_joystick/promocion.py#L170.spa
dc.relation.referencesD. A. Martínez, E. Mojica-Nava, K. Watson y T. Usländer, «Multiagent Self-Redundancy Identification and Tuned Greedy-Exploration», IEEE Transactions on Cybernetics, vol. 52, n.o 7, págs. 5744-5755, 2022. DOI: 10.1109/TCYB.2020.3035783.spa
dc.relation.referencesD. Martínez y E. Mojica-Nava, «Distortion based potential game for distributed coverage control», Information Sciences, vol. 600, págs. 209-225, 2022, ISSN: 0020-0255. DOI: https://doi.org/10.1016/j.ins.2022.03.090. dirección: https://www.sciencedirect.com/science/article/pii/S0020025522003176.spa
dc.rightsAtribución-NoComercial-SinDerivadas 2.5 Colombia*
dc.rights.accessrightsinfo:eu-repo/semantics/openAccess
dc.rights.coarhttp://purl.org/coar/access_right/c_abf2spa
dc.rights.localAbierto (Texto Completo)spa
dc.rights.urihttp://creativecommons.org/licenses/by-nc-nd/2.5/co/*
dc.subject.lembIngeniería Electrónicaspa
dc.subject.lembIngenieríaspa
dc.subject.lembElectrónicaspa
dc.titleImplementación de Rutinas en ROS2 para la Aplicación de Algoritmos Multiagentes en el Movimiento de Robots Tipo Soccer Small Size.spa
dc.type.coarhttp://purl.org/coar/resource_type/c_7a1f
dc.type.coarversionhttp://purl.org/coar/version/c_ab4af688f83e57aa
dc.type.driveinfo:eu-repo/semantics/bachelorThesis
dc.type.localTrabajo de gradospa
dc.type.versioninfo:eu-repo/semantics/acceptedVersion

Archivos

Bloque original (3 archivos)

Nombre: 2024danielcastiblanco.pdf
Tamaño: 6.16 MB
Formato: Adobe Portable Document Format

Nombre: 2024cartadederechosdeautor.pdf
Tamaño: 1.03 MB
Formato: Adobe Portable Document Format

Nombre: 2024cartadefacultad.pdf
Tamaño: 38.31 KB
Formato: Adobe Portable Document Format

Bloque de licencias (1 archivo)

Nombre: license.txt
Tamaño: 807 B
Formato: Item-specific license agreed upon to submission