-
FPT - Complex occlusion disambiguation
Fast People Tracking (FPT), developed at the Visilab Research Center of the University of Messina by Gianpaolo Ingegneri.
Its primary aim is to implement background-subtraction and object-tracking algorithms fast enough to run in real time, even on embedded systems such as the Imote2 (XScale ARM, Linux).
https://wn.com/Fpt_Complex_Occlusion_Disambiguation
published: 13 Jul 2011
duration: 0:28
views: 192
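The videos ship no source code, but the core technique named here, background subtraction followed by blob tracking, is standard. A minimal sketch using OpenCV (the MOG2 model choice and all parameter values are illustrative assumptions, not details of FPT itself):

```python
# Minimal background-subtraction + blob-tracking loop (illustrative sketch,
# not the FPT implementation; all parameter values are assumptions).
import cv2

cap = cv2.VideoCapture(0)                       # any camera or video file
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                        detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                      # per-pixel foreground mask
    mask = cv2.medianBlur(mask, 5)              # suppress salt-and-pepper noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:            # drop tiny blobs (assumed threshold)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking sketch", frame)
    if cv2.waitKey(1) == 27:                    # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```

On an embedded target like the Imote2, the same loop structure would apply, but the adaptive mixture model would typically be swapped for something cheaper, such as a running-average background.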
-
Tracking 10 peoples position in real time with disambiguation algorithm
Towards Cooperative Localization of Wearable Sensors using Accelerometers and Cameras
https://wn.com/Tracking_10_Peoples_Position_In_Real_Time_With_Disambiguation_Algorithm
published: 10 Jan 2010
duration: 1:00
views: 90
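Only the title of the underlying paper is given, but pairing wearable accelerometers with cameras suggests identity disambiguation by signal matching: each anonymous camera track is assigned to the wearer whose accelerometer best explains its motion. A hypothetical sketch of that matching step (function names, the Pearson-correlation criterion, and the Hungarian assignment are my assumptions, not the paper's method):

```python
# Hypothetical sketch: assign anonymous camera tracks to wearable sensors by
# correlating per-track acceleration (from image positions) with accelerometer
# magnitudes. All names are illustrative; this is not the paper's algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def track_accel(positions, dt):
    """(T, 2) pixel positions -> (T,) acceleration magnitude."""
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    return np.linalg.norm(acc, axis=1)

def match_tracks_to_sensors(tracks, imu_mags, dt=1 / 30):
    """tracks: list of (T, 2) arrays; imu_mags: list of (T,) accelerometer
    magnitudes, time-aligned with the video. Returns {track idx: sensor idx}."""
    cost = np.empty((len(tracks), len(imu_mags)))
    for i, tr in enumerate(tracks):
        a = track_accel(tr, dt)
        for j, imu in enumerate(imu_mags):
            cost[i, j] = -np.corrcoef(a, imu)[0, 1]   # high correlation = low cost
    rows, cols = linear_sum_assignment(cost)          # optimal one-to-one matching
    return dict(zip(rows, cols))
```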
-
User Interaction Models for Disambiguation in Programming by Example
Mikaël Mayer, Gustavo Soares, Maxim Grechkin, Vu Le, Mark Marron, Oleksandr Polozov, Rishabh Singh, Ben Zorn, Sumit Gulwani
Abstract:
Programming by Examples (PBE) has the potential to revolutionize end-user programming by enabling end users, most of whom are non-programmers, to create small scripts for automating repetitive tasks. However, examples, though often easy to provide, are an ambiguous specification of the user's intent. Because of that, a key impedance in adoption of PBE systems is the lack of user confidence in the correctness of the program that was synthesized by the system. We present two novel user interaction models that communicate actionable information to the user to help resolve ambiguity in the examples. One of these models allows the user to effectively navigate between the huge set of programs that are consistent with the examples provided by the user. The other model uses active learning to ask directed example-based questions to the user on the test input data over which the user intends to run the synthesized program. Our user studies show that each of these models significantly reduces the number of errors in the performed task without any difference in completion time. Moreover, both models are perceived as useful, and the proactive active-learning based model has a slightly higher preference regarding the users' confidence in the result.
ACM DL: http://dl.acm.org/citation.cfm?id=2807459
DOI: http://dx.doi.org/10.1145/2807442.2807459
https://wn.com/User_Interaction_Models_For_Disambiguation_In_Programming_By_Example
published: 25 Oct 2015
duration: 0:30
views: 839
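The second interaction model in the abstract, active learning via directed example-based questions, can be illustrated with a toy loop: keep the set of programs consistent with the examples, find a test input on which they disagree most, and ask the user for the intended output. Everything below is a schematic stand-in for the real synthesis machinery (candidate programs are plain Python callables here):

```python
# Toy active-learning disambiguation over candidate programs. Candidates are
# plain callables; a real PBE system synthesizes and represents them
# symbolically. Outputs must be hashable.

def most_distinguishing_input(candidates, test_inputs):
    """Return the input whose outputs split the candidates most evenly."""
    best, best_score = None, 1.0
    for x in test_inputs:
        groups = {}
        for p in candidates:
            groups.setdefault(p(x), []).append(p)
        if len(groups) < 2:
            continue                        # every candidate agrees on x
        score = max(len(g) for g in groups.values()) / len(candidates)
        if score < best_score:              # smaller majority = more informative
            best, best_score = x, score
    return best

def disambiguate(candidates, test_inputs, ask_user):
    """ask_user(x) returns the output the user intends for input x."""
    while len(candidates) > 1:
        x = most_distinguishing_input(candidates, test_inputs)
        if x is None:                       # no test input distinguishes them
            break
        y = ask_user(x)
        candidates = [p for p in candidates if p(x) == y]
    return candidates

# Three candidate string programs a synthesizer might deem consistent so far;
# simulate a user whose real intent is title-casing:
cands = [str.upper, str.title, str.capitalize]
print(len(disambiguate(cands, ["ab", "ab cd"], ask_user=str.title)))  # -> 1
```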
-
Tracking 6 peoples position in real time with disambiguation algorithm
Towards Cooperative Localization of Wearable Sensors using Accelerometers and Cameras
https://wn.com/Tracking_6_Peoples_Position_In_Real_Time_With_Disambiguation_Algorithm
published: 10 Jan 2010
duration: 1:00
views: 57
-
Disambiguation of Imprecise Input with One-dimensional Rotational Text Entry
Authors:
Will Walmsley, W. Xavier Snelgrove, Khai N Truong
Abstract:
We introduce a distinction between disambiguation supporting continuous vs. discrete ambiguous text entry. With continuous ambiguous text entry methods, letter selections are treated as ambiguous due to expected imprecision rather than due to discretized letter groupings. We investigate the simple case of a one-dimensional character layout to demonstrate the potential of techniques designed for imprecise entry. Our rotation-based sight-free technique, Rotext, maps device orientation to a layout optimized for disambiguation, motor efficiency, and learnability. We also present an audio feedback system for efficient selection of disambiguated word candidates, and explore the role that time spent acknowledging word-level feedback plays in text entry performance. Through a user study, we show that despite missing on average by 2.46–2.92 character positions, with the aid of a maximum a posteriori (MAP) disambiguation algorithm, users can average a sight-free entry speed of 12.6 wpm with 98.9% accuracy within 13 sessions (4.3 hours). In a second study, expert users are found to reach 21 wpm with 99.6% accuracy after session 20 (6.7 hours) and continue to grow in performance, with individual phrases entered at up to 37 wpm. A final study revisits the learnability of the optimized layout. Our modelling of ultimate performance indicates maximum overall sight-free entry speeds of 29.0 wpm with audio feedback, or 40.7 wpm if an expert user could operate without relying on audio feedback.
https://wn.com/Disambiguation_Of_Imprecise_Input_With_One_Dimensional_Rotational_Text_Entry
published: 26 Mar 2014
duration: 0:31
views: 432
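The MAP disambiguation step the abstract refers to is conceptually simple: score each candidate word by its prior probability times the likelihood of the observed, imprecise selections. A minimal sketch assuming a Gaussian imprecision model over a 1-D layout (the layout string, sigma, and the toy lexicon are illustrative, not Rotext's actual values):

```python
# Minimal MAP word disambiguation for imprecise 1-D selections.
# P(word | touches) is proportional to P(word) * prod_i N(touch_i; pos(word_i), sigma).
# Layout string, SIGMA, and the lexicon below are illustrative assumptions.
import math

LAYOUT = {c: i for i, c in enumerate("etaoinshrdlucmfwypvbgkjqxz")}
SIGMA = 2.5                                  # assumed selection imprecision

def log_likelihood(word, touches):
    if len(word) != len(touches):
        return float("-inf")
    return sum(-(t - LAYOUT[c]) ** 2 / (2 * SIGMA ** 2)
               for c, t in zip(word, touches))

def map_decode(touches, lexicon):
    """lexicon maps word -> prior probability (e.g. unigram frequency)."""
    return max(lexicon,
               key=lambda w: math.log(lexicon[w]) + log_likelihood(w, touches))

# Touches that miss 'h' (position 7) and 'i' (position 4) by 1-2 positions
# still decode to the intended word:
print(map_decode([8.9, 3.7], {"hi": 0.6, "ha": 0.3, "it": 0.1}))   # -> "hi"
```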
-
Jon Wiggins - Understanding the News around the World with Web Scraping and NLP at Scale | PyData
www.pydata.org
Every day, media companies around the world publish millions of articles spanning multiple languages, and at Chartbeat we process this data to understand what is driving reader engagement. In this talk we discuss real-world lessons learned in building a production pipeline for scraping and extracting metadata in real time from this multitude of news articles. The pipeline leverages a mix of pre-trained and custom-built machine learning models in Python for content extraction, natural language processing, categorization, translation, and entity linking, enabling availability of metadata for an article in just three seconds on average.
PyData is an educational program of NumFOCUS, a 501(c)(3) non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R.
PyData conferences aim to be accessible and community-driven, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases.
https://wn.com/Jon_Wiggins_Understanding_The_News_Around_The_World_With_Web_Scraping_And_Nlp_At_Scale_|_Pydata
published: 23 Jan 2023
duration: 35:15
views: 1205
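The talk does not publish Chartbeat's code, but the scrape, extract, and annotate flow it outlines can be wired together from off-the-shelf pieces. A toy version (the library choices, the URL, and the English-only NER shortcut are my assumptions; a production system adds translation, categorization, entity linking, and queueing):

```python
# Toy scrape -> extract -> annotate pipeline. Library choices (trafilatura,
# langdetect, spaCy) and the URL are illustrative, not Chartbeat's stack.
import trafilatura
import spacy
from langdetect import detect

nlp = spacy.load("en_core_web_sm")          # requires the small English model

def article_metadata(url):
    html = trafilatura.fetch_url(url)       # download the raw page
    text = trafilatura.extract(html) if html else None   # strip boilerplate
    if not text:
        return None
    lang = detect(text)
    entities = []
    if lang == "en":                        # a real system would translate first
        doc = nlp(text[:100_000])
        entities = sorted({(e.text, e.label_) for e in doc.ents})
    return {"url": url, "lang": lang, "entities": entities}

print(article_metadata("https://example.com/some-news-article"))
```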
-
Disambiguation Techniques for Freehand Object Manipulations in VR | IEEEVR 2020 Presentation
Presentation at IEEEVR 2020 for "Disambiguation Techniques for Freehand Object Manipulations in Virtual Reality"
ABSTRACT
Manipulating virtual objects using bare hands has been an attractive interaction paradigm in virtual and augmented reality due to its intuitive nature. However, one limitation of freehand input lies in the ambiguous resulting effect of the interaction. The same gesture performed on a virtual object could invoke different operations on the object depending on the context, object properties, and user intention. We present an experimental analysis of a set of disambiguation techniques in a virtual reality environment, comparing three input modalities (head gaze, speech, and foot tap) paired with three different timings in which options appear to resolve ambiguity (before, during, and after an interaction). The results indicate that using head gaze for disambiguation during an interaction with the object achieved the best performance.
D. L. Chen, R. Balakrishnan and T. Grossman, "Disambiguation Techniques for Freehand Object Manipulations in Virtual Reality," 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Atlanta, GA, USA, 2020, pp. 285-292, doi: 10.1109/VR46266.2020.00048.
https://wn.com/Disambiguation_Techniques_For_Freehand_Object_Manipulations_In_Vr_|_Ieeevr_2020_Presentation
published: 09 Oct 2023
duration: 14:24
views: 5
-
Disambiguation Techniques for Freehand Object Manipulations in Virtual Reality (IEEE VR 2020)
Manipulating virtual objects using bare hands has been an attractive interaction paradigm in VR and AR. However, one limitation of freehand input lies in the ambiguous resulting effect of the interaction, as the same gesture performed on a virtual object could invoke different operations. We present an experimental analysis of a set of disambiguation techniques in VR, comparing three input modalities (head gaze, speech, and foot tap) paired with three different timings to resolve ambiguity (before, during, and after an interaction). The results indicate that using head gaze for disambiguation during an interaction with the object achieved the best performance.
===
Di (Laura) Chen, Ravin Balakrishnan, Tovi Grossman. 2020. Disambiguation Techniques for Freehand Object Manipulations in Virtual Reality. IEEE Conference on Virtual Reality and 3D User Interfaces.
https://www.dgp.toronto.edu/
https://wn.com/Disambiguation_Techniques_For_Freehand_Object_Manipulations_In_Virtual_Reality_(Ieee_Vr_2020)
published: 08 Apr 2020
duration: 4:04
views: 352
-
Interactive disambiguation of object references for grasping tasks
Using a 3D scene segmentation [1] to yield object hypotheses that are subsequently labeled by a simple NN classifier, the robot system can talk about objects and their properties (color, size, elongation, position). Ambiguous references to objects are resolved in an interactive dialogue that asks for the most informative object property in the given situation; ultimately, pointing gestures can be used to resolve a reference. The robot system is able to pick and place objects at a new target location (which may itself be changing), to hand an object over to the user, and to talk about the current scene state.
[1] A. Ückermann, R. Haschke, and H. Ritter, "Realtime 3D segmentation for human-robot interaction," in Proc. IROS, 2013, pp. 2136--2143.
https://wn.com/Interactive_Disambiguation_Of_Object_References_For_Grasping_Tasks
published: 18 Jul 2014
duration: 2:02
views: 821
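"Asking for the most informative object property" is naturally modeled as picking the property whose answer is expected to narrow the candidate set the most. A small sketch of that selection step (the entropy criterion and the property values are my illustration, not necessarily the paper's exact formulation):

```python
# Sketch: choose the object property whose answer is expected to leave the
# least ambiguity among candidate objects (minimum expected remaining entropy).
# Property names/values are illustrative, not from the paper.
import math
from collections import Counter

def expected_remaining_entropy(candidates, prop):
    groups = Counter(obj[prop] for obj in candidates)
    n = len(candidates)
    # Answer a arrives with probability c/n and leaves a uniform set of c
    # candidates, whose entropy is log2(c).
    return sum((c / n) * math.log2(c) for c in groups.values())

def most_informative_property(candidates, properties):
    return min(properties, key=lambda p: expected_remaining_entropy(candidates, p))

objs = [{"color": "red",  "size": "small"},
        {"color": "red",  "size": "small"},
        {"color": "red",  "size": "large"},
        {"color": "blue", "size": "large"}]
# "size" splits the four candidates 2/2, "color" only 3/1, so ask about size:
print(most_informative_property(objs, ["color", "size"]))   # -> "size"
```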
-
Pointing Based Object Recognition and Disambiguation for Autonomous Service Robots - Test Videos
published: 05 May 2021