PhD student Paper @ IEEE VR 2020 Journal Track: Best Paper Nominee

ARC's Visionarium Lab provides world-class support for Virginia Tech researchers using virtual and mixed reality visualization. Here is a highlight from the Building and Construction Department taking advantage of the unique HyperCube instrument:

S. Hasanzadeh, N. F. Polys and J. M. De La Garza, "Presence, Mixed Reality, and Risk-Taking Behavior: A Study in Safety Interventions," in IEEE Transactions on Visualization and Computer Graphics.
doi: 10.1109/TVCG.2020.2973055

HyperCube Experimental Conditions

Dr. Hasanzadeh will be joining Purdue's Civil Engineering Faculty this Fall! Congratulations!!

Abstract: Immersive environments have been successfully applied to a broad range of safety training in high-risk domains. However, very little research has used these systems to evaluate the risk-taking behavior of construction workers. In this study, we investigated the feasibility and usefulness of providing passive haptics in a mixed-reality environment to capture the risk-taking behavior of workers, identify at-risk workers, and propose injury-prevention interventions to counteract excessive risk-taking and risk-compensatory behavior. Within a mixed-reality environment in a CAVE-like display system, our subjects installed shingles on a (physical) sloped roof of a (virtual) two-story residential building on a morning in a suburban area. Through this controlled, within-subject experimental design, we exposed each subject to three experimental conditions by manipulating the level of safety intervention. Workers' subjective reports, physiological signals, psychophysical responses, and reactionary behaviors were then considered as promising measures of Presence. The results showed that our mixed-reality environment was a suitable platform for triggering behavioral changes under different experimental conditions and for evaluating the risk perception and risk-taking behavior of workers in a risk-free setting. These results demonstrated the value of immersive technology to investigate natural human factors.


Keywords: Safety; haptic interfaces; training; virtual environments; physiology; human factors; mixed reality; passive haptics; presence; risk-taking behavior; X3D; construction safety
URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8998361&isnumber=4359476

HyperCube views

3D Enterprise Workshop in Arlington, VA


Virginia Tech and the Web3D Consortium hosted a one-day workshop providing presentations to Naval enterprise leaders on the use of collaborative, Web-based X3D visualization techniques by Government, Academia, and Industry practitioners. Virginia Tech's Advanced Research Computing (ARC) and Center for Geospatial Information Technology (CGIT) hosted the event, bringing proven, innovative methods for data fusion and interactive 3D visualization.

Friday, December 6, 2019 (all day): Virginia Tech Executive Briefing Center, 900 N Glebe Rd, Arlington, VA 22203. Falls Church room.

The workshop fostered networking and showcased current and emerging capabilities for enterprise-scale, networked, geo-enabled 3D communications. These capabilities can feasibly be adapted to increase shared understanding and technical coordination agility: improving digital connectedness, accelerating consensus and decision-making processes across systems engineering, advancing planning conceptualization, and supporting collaborative virtual rehearsal for ashore and afloat logistics activities. Building shared priorities adds value for everyone. Workshop participants discussed their reactions "around the room" together, sharing ideas about opportunities and potential synergies that can further the art of the possible and realize shared innovation.

The workshop showcased the benefits and strengths of using the Web3D Consortium’s Extensible 3D (X3D) open 3D standards for the WWW and included:

  • Demonstrations using 3D scanners, 3D software, and 3D data storage and processing, which proved compelling use cases for how the X3D standard can enable collaborative 3D visualization
  • An overview of Web3D standards and their value proposition in the ecology of 3D data representations and file formats
  • An appreciation of the long game of technology and information systems investments, borne out by this open International Standard (ISO/IEC) and WWW community, including dozens of tools, pipelines, and engines

Presentations from the workshop are available at the link below.

The Twitter thread includes highlight points, photos, screenshots and links for everyone.  

https://www.web3d.org/event/collaborative-3d-visualization-ashore-afloat-and-expeditionary-readiness-workshop

Immersive Agroforestry

Virginia Tech's Advanced Research Computing supports computational science in all its forms across the university. As part of the Division of IT, we support and collaborate with faculty and Extension agents across the institution, for example by providing world-class capabilities in the technologies for 3D capture, simulation, analysis, and interactive visualization of real places over time. The use of rich data layers, computational models, and 3D rendering has ushered in a new era of research, education, and training toward smart (and effective) Geodesign.

Fall 2019 Forestry Class inspects and discusses their design

Dr. John Munsell, Associate Professor and Forest Management Extension Specialist, teaches his FOR 4334 class with a strong project component. Students must apply the design principles and Best Management Practices (BMPs) they have learned to a real location/site. This process involves many stakeholders and the effective communication of new spatial and agronomic concepts, a perfect opportunity for immersive, interactive 3D communication. The Fall 2019 student class project presentations were held in the HyperCube immersive projection environment in the Visionarium Lab. A video of his 2018 class in the Visionarium Lab can be seen here. Drs. Steven Kruger, Nicholas Polys, and Lance Arsenault collaborated on the data collection, processing, and immersive visualization.

The latest news includes the award of a new $590,000 grant, led by Dr. Munsell. This award will further support our Land-Grant mission with a new age of impactful Education and Extension work on Appalachian Non-Timber Forest Products (NTFPs).

"The funding will continue federal support of the Appalachian Beginning Forest Farmer Coalition (ABFFC), which is based at Virginia Tech and was initiated in 2015. The project receives funding from a variety of sources in addition to the federal government, and which to date totals $1.5 million including the latest grant via a program of the United States Department of Agriculture. " - https://news.mongabay.com/

Multiple Stakeholders participate in the simulation and discussion with traditional media and an immersive experience

This collaboration has crossed traditional academic boundaries with great success and impact. The team includes experts in GIS, Remote Sensing, Lidar, Drones, Hydrology, and 3D visualization. The VT collaboration was originally funded with Dr. Cully Hession and Peter Sforza by the ICAT SEAD program through the Fusality project. Partners with ARC in the collaboration include the Center for Geospatial Information Technology (CGIT) and the Stream Lab (BSE), working to visualize and monitor the VT campus, Stroubles Creek, and the Catawba Sustainability Center.

See also:

  • Polys NF, Sforza P, Hession WC, Munsell J. Extensible experiences: fusality for stream and field. In Proceedings of the 21st International Conference on Web3D Technology, 2016 Jul 22 (pp. 179-180). ACM.
  • Polys N, Hotter J, Lanier M, Purcell L, Wolf J, Hession WC, Sforza P, Ivory JD. Finding frogs: using game-based learning to increase environmental awareness. In Proceedings of the 22nd International Conference on 3D Web Technology, 2017 Jun 5 (p. 10). ACM.
  • Polys N, Newcomb C, Schenk T, Skuzinski T, Dunay D. The value of 3D models and immersive technology in planning urban density. In Proceedings of the 23rd International ACM Conference on 3D Web Technology, 2018 Jun 20 (p. 13). ACM.

Quantifying opinion

https://sciencenode.org/feature/Quantifying%20opinion.php

Dr. Srijith Rajamohan and Alana Romanella are working on a deep-learning-based interactive visualization tool to understand and plot political ideologies based on Twitter activity. They have been collecting data over the past four months, analyzing roughly 3 million tweets based on certain hashtags. The goal of their work is to construct a visualization tool that can help identify political ideology.

Color Expert Visits Virginia Tech

Lecture: Sept 23rd at 1 pm in TORG 1100

Color is a fundamental component of the world around us and of our experience. It plays a key role in our understanding of our environment, from science and engineering to design and art ... but how does it work? And what makes digital color different?

This September, renowned color and visualization expert Theresa-Marie Rhyne will visit Virginia Tech to share her perspective on the theory and practice of color. With years of experience building notable visualization programs and as the author of the recent book, Applying Color Theory to Digital Media and Visualization, Theresa-Marie will visit with faculty and students as well as give a lecture to a graduate Computer Science class and to the broader University Community. Her talk is scheduled for 1 pm Monday Sept 23rd in Torgersen Hall 1100.

“Color Fundamentals for Digital Content Creation & Visualization"

Her slides are posted HERE!

She even wrote up some color studies about her visit to the VT Visionarium and VR rendering in the HyperCube!


Abstract: We provide an overview of the fundamentals of color theory and approaches to color selection for visualization and exploration. Our journey includes the introduction to the concepts of color models and harmony, a review of color vision principles, the defining of color gamut, spaces and systems, and highlighting online and mobile apps for performing color analyses of digital media. We feature concepts from art and design such as extending the fundamentals of the Bauhaus into digital media as well as review color perception and appearance principles from vision and visualization researchers and practitioners. Newly emerging trends in automated color selection and deep learning colorization are also noted.
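The color models, harmony, and color spaces mentioned above can be explored directly in code. As a small illustrative sketch (not material from the talk itself), Python's standard colorsys module converts between the RGB and HSV color spaces, and a complementary color falls halfway around the hue wheel:

```python
import colorsys

# Pure red in RGB (channels in the range 0.0-1.0)
red = (1.0, 0.0, 0.0)

# Convert to HSV: hue 0.0 (red), full saturation, full value
h, s, v = colorsys.rgb_to_hsv(*red)
print(h, s, v)  # 0.0 1.0 1.0

# A complementary color sits half a turn around the hue wheel:
# red's complement is cyan
complement = colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)
print(complement)  # (0.0, 1.0, 1.0)
```

HSV is convenient for this kind of harmony calculation because hue is a single circular coordinate, whereas the same relationship is awkward to express directly in RGB.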

About the Instructor:
Theresa-Marie Rhyne is an expert in the field of computer-generated visualization and a consultant who specializes in applying artistic color theories to visualization and digital media. Her book, "Applying Color Theory to Digital Media and Visualization," was published by CRC Press in 2016 and is recognized by librarians and color professionals as a best-selling reference. Her short course at SIGGRAPH 2019, the premiere conference on computer graphics and interactive techniques, was standing room only. Theresa-Marie has consulted with the Stanford University Visualization Group on a color suggestion prototype system, and with the Center for Visualization at the University of California, Davis, and the Scientific Computing and Imaging Institute at the University of Utah on applying color theory to ensemble data visualization. Prior to her consulting work, Theresa-Marie founded two visualization centers: (1) the United States Environmental Protection Agency's Scientific Visualization Center and (2) the Center for Visualization and Analytics at North Carolina State University.

This seminar and visit were sponsored by:

Virginia Tech & Jefferson National Lab: The Center for Nuclear Femtography

A new collaboration is underway across the Commonwealth of Virginia as the CLAS-12 instrument comes online at Jefferson National Lab: The Center for Nuclear Femtography is a Commonwealth-funded center dedicated to imaging the heart of the atom: the nucleon. In 2018, researchers from around the state were called to the first annual Symposium on Nuclear Femtography, held in Charlottesville; later that year, a round of pilot projects was funded to build partnerships across universities and across disciplines.

As a trans-disciplinary concept, the Center for Nuclear Femtography (CNF) does not target only Virginia physicists: such collaborations draw on experts from across the country and the world, including computer scientists, statisticians, mathematicians, and mechanical engineers, to name a few. ARC faculty Nicholas Polys and Srijith Rajamohan are Co-PIs on two of these CNF pilot projects: "Visualizing Femtoscale Dynamics" and "Next-generation Visual Analysis Workspace for Multidimensional Nuclear Femtography Data". Phase One is concluding with several exchange visits, meetings and seminars, and a first set of deliverables (public soon).

Dr. Markus Diefenthaler of Jefferson Lab and JooYoung Whang (ARC Intern) examine 3D histogram profiles of Pion and Kaon kinematics in the Visionarium HyperCube.
The Virginia Tech team visited Jefferson National Labs this summer to build a new collaboration, language, and tools with the Center for Nuclear Femtography.

Guest Seminar: Markus Diefenthaler, Exploring the heart of matter at Jefferson Lab

Please mark your calendar for this exciting Guest Seminar:

JLab Physicist Markus Diefenthaler

Thursday, August 29 at 2:00 pm in TORGERSEN 1100

Title: Exploring the heart of matter at Jefferson Lab

Abstract
Thomas Jefferson National Accelerator Facility in Newport News, Virginia, is a U.S. Department of Energy Office of Science national laboratory. Jefferson Lab’s unique and exciting mission is to expand humankind’s knowledge of the universe by studying the fundamental building blocks of matter within the nucleus: subatomic particles known as quarks and gluons. In my seminar, I will present our science program to understand the structure of atomic nuclei directly from the dynamics of their quarks and gluons, governed by the theory of their interactions, quantum chromodynamics (QCD), and motivate how advances in theory, accelerator and detector technologies, and in particular in computer science will enable a new frontier in nuclear science.

Organized In Cooperation with Advanced Research Computing and TLOS.

Researchers use virtual reality, GIS data to enhance trail management

3D Models are increasingly valuable for safety and for scenery. ARC and CGIT collaborated to host researchers considering scenic resources inventory, including protocols and results for the Appalachian Trail:

Roanoke Times article and photos:

https://www.roanoke.com/news/education/higher_education/virginia_tech/researchers-use-virtual-reality-gis-data-to-enhance-trail-management/article_ac186080-f099-5bf9-b010-db77daef89e8.html

VT ARC hosting AI Hackathon in collaboration with OpenPOWER and IBM

VT OpenPOWER Hackathon –

April 19th through May 3rd

Description

The VT OpenPOWER Hackathon is a first-of-its-kind artificial intelligence (AI) Hackathon at Virginia Tech, focused on accelerating AI model training, with the ambition of sharing the power of AI technologies with Virginia Tech's student community.  Prizes will be awarded to winning teams: 1st place = $2,000, 2nd place = $1,000, and 3rd place = $500!

The Hackathon is being organized by the OpenPOWER Foundation Academic Workgroup, in collaboration with Virginia Tech faculty from Advanced Research Computing, and the IBM Systems Client Experience Center. The event takes place between April 19 and May 3, 2019. The initiation and problem statement will be shared on Friday, April 19 (Hackathon Launch Day). Hackers will have two weeks to work independently. The Hackathon will end with power hackers submitting projects to be presented and judged on the Hackathon Awards Day, Friday, May 3, 2019.

Participants will help to create a new wave of high performing AI models that have the potential to impact all our lives.

Hackathon Theme

The theme of the Hackathon is "Accelerating AI at Scale." The goal is to demonstrate that optimizing the time needed to train a model can significantly impact the time required to obtain results. Students will be encouraged to take an existing model they have trained on Power Systems and demonstrate how its training time can be accelerated, to the same accuracy, using the latest state-of-the-art techniques on the same Power Systems.

Details of the event will be provided two weeks prior to the May 3rd judging day, allowing participants to prepare and access system resources to develop their submission. The aim of the competition is to unite students from different creative fields to accelerate the training of AI model prototypes using AI tools in a limited amount of time. The competition will test the students’ fortitude and ability to work under pressure.

 

Location/Dates and Times

 

Hackathon Launch Day - Friday, April 19th, 11 AM – 2 PM, 1100 Torgersen Hall

Hackathon Awards Day - Friday, May 3rd , 10:30 AM – 2 PM, 1100 Torgersen Hall

 

 

How to prepare for the Hackathon

 

Students are encouraged to select a starting machine learning/deep learning model they are already working on, or to select a model from other sources (GitHub, kaggle.com, Model Zoo, etc.), from which they can collect a baseline training time and test accuracy.

 

Students will evaluate ways to accelerate the training time for their model by exploring advanced techniques and the use of:

  • Multiple GPUs with a server
  • Multiple GPUs across multiple servers
  • Accelerated machine learning libraries
  • Other innovative approaches
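Whichever acceleration approach a team chooses, the result is judged against the baseline run. As an illustrative sketch (not official hackathon tooling), a speedup can be reported as the ratio of baseline to accelerated training time, with a check that accuracy did not suffer; the `train_fn` callable and the numbers below are hypothetical placeholders:

```python
import time

def measure(train_fn):
    """Time a training run; returns (elapsed_seconds, result).

    train_fn is a hypothetical placeholder for your actual training call.
    """
    start = time.perf_counter()
    result = train_fn()
    return time.perf_counter() - start, result

def speedup(baseline_seconds, accelerated_seconds):
    """How many times faster the accelerated run finished."""
    return baseline_seconds / accelerated_seconds

def accuracy_retained(baseline_acc, accelerated_acc, tolerance=0.005):
    """True if the accelerated model matched baseline accuracy within tolerance."""
    return accelerated_acc >= baseline_acc - tolerance

# Placeholder numbers: a multi-GPU run finishing in 150 s versus a
# single-GPU baseline of 1200 s, with comparable test accuracy.
print(speedup(1200.0, 150.0))           # 8.0
print(accuracy_retained(0.931, 0.929))  # True
```

Recording both numbers for every run makes it easy to show, on Awards Day, how much acceleration was achieved and that the model's accuracy held up.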

 

Come to the Hackathon Launch Day to learn about the technologies you can apply. Be ready to put the final touches on your accelerated model to demonstrate your test accuracy and speedups at Hackathon Awards Day. We encourage the use of open datasets and the sharing of code, Jupyter notebooks, etc., as part of the Hackathon's skill- and team-building goals.

 

Technologies and Available Resources

 

Some of the more popular training acceleration tools and techniques will be covered on Launch Day and can be found in IBM PowerAI on the AC922 Cluster at VT (Huckleberry – arc.vt.edu) including:

  • Tensorflow benchmark samples
  • IBM Distributed Deep Learning
  • IBM Snap ML library
  • Nvidia RAPIDS.AI library
  • IBM Large Model Support (note the need for higher resolution datasets)
  • Frameworks optimized for POWER8

Review the PowerAI education and technical resources to get started.

 

Participant Registration

 The Hackathon is open to all Virginia Tech students. Register soon, as space is limited! To register, create a team and follow this link: https://forms.gle/oDaqjUNYG6Zhz5gC6

Please note that teams should have 2-6 members (in the spirit of fairness).

We also have an open invitation for industry experts to serve as mentors during the Hackathon, helping students refine their ideas. If you are interested in supporting this event as a mentor, please contact clarisse@us.ibm.com.  For other questions/concerns, please email rsettlag@vt.edu.

Rules

  • We encourage participation in teams of 2-6 participants to foster collaborative sharing of skills and knowledge.
  • We want to see innovative code, not just PowerPoints! However, creative presentation of the results and re-use of code are also important and encouraged.
  • Access to the VT Power Systems cluster resources will be provided.
  • Participants are not restricted to any AI framework, but models should be machine learning or deep learning models.
  • On various occasions throughout the event, photographs and video will be taken. By applying, participants are consenting to being photographed and filmed.

Be Excellent To Each Other: This is, by all means, the most important rule. This is a very diverse environment, made up of people from many backgrounds. Any behavior or presentation that objectifies or belittles others will be halted, with the presenters immediately disqualified and removed from the event.

Evaluation Criteria

  • Utilization of AI Technology (30 points): How well does the concept follow our theme? How well is the technology implemented? Does the project scale?
  • Impact (30 points): Is there a clear benefit for the solution? How much acceleration was achieved? Did the accuracy of the model suffer?
  • Creativity (20 points): How creative was the team in developing an innovative solution for the challenge?
  • Technical Effort (20 points): Is the project technically impressive? Complex? Does it seem remarkable that someone could achieve this acceleration as part of the Hackathon?

Prizes (all prizes are in USD)

  • 1st Place: $2,000
  • 2nd Place: $1,000
  • 3rd Place: $500

Launch Day Agenda
11:30 AM - Noon:       Hackathon Launch!
Noon:                          Break for Pizza
12:30 – 2:00 PM:        Hackathon Tools you can use

Awards Day Agenda (subject to change based on # of teams)
10:00 AM:                   Coffee, Tea, and Check-In
10:30 – 11:00 AM:      Welcome and Overview
11:00 AM - Noon:       Judge/Mentor Walkthroughs and Questions
Noon:                          Sponsored Lunch
12:40 – 1:40 PM:        Final presentations (10 min max per team)
1:40 – 2:00 PM:           Hacking Ends and winners announced!

Post-Hackathon

Just because the hacking is over doesn't mean you shouldn't carry on with the great work you started! We encourage you to keep going, consider becoming a mentor in future hackathons, and share your results with the community.