ARC is proud to announce the release of a new cluster called Infer, which provides 18 Intel Skylake nodes, each equipped with an Nvidia T4 GPU. The cluster's name, "Infer," alludes to the AI/ML inference capabilities of the T4's tensor cores. We think these nodes will also be a great all-purpose resource for researchers making their first forays into GPU-enabled computation of any type. For more information on the T4 architecture and its performance relative to, e.g., V100 GPUs, see the following pages:
Cluster details and examples are provided on our Infer page. Users may log in at infer1.arc.vt.edu and try it out at their earliest convenience. For now, software installs are mostly limited to CUDA and its associated toolchains; users with additional requests may submit them via a help ticket.
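As a minimal first test, a batch job can confirm that a T4 is visible and the CUDA toolchain is loaded. This is an illustrative sketch only: the partition name and the unversioned `module load cuda` line are assumptions, not confirmed Infer settings; check the Infer page for the actual values.

```shell
#!/bin/bash
#SBATCH --partition=t4_normal_q   # hypothetical partition name; see the Infer page
#SBATCH --nodes=1
#SBATCH --gres=gpu:1              # request one GPU (a T4 on Infer nodes)
#SBATCH --time=00:05:00

module load cuda                  # load whichever CUDA module Infer provides
nvidia-smi                        # should list one Tesla T4
nvcc --version                    # confirms the CUDA compiler is on PATH
```

Submitting with `sbatch` and inspecting the job's output file is enough to verify that GPU jobs schedule correctly before porting real workloads.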
As part of her Environmental Design class, Professor Katie Meaney teaches about the dialogue between objects and their context. Here, a famous Bauhaus sculpture is transported (along with the users) to new locations and novel dialogues, rendered in X3D in the Virginia Tech Visionarium Lab's 27.4-million-stereo-pixel HyperCube.
Tinkercliffs, ARC's most recent cluster, is online and is making news owing to its scale (more than 300 nodes and almost 42,000 CPU cores) and the challenges its acquisition and commissioning faced in the COVID-19 era.
As of early October, access to the cluster is provided through a consultation with an ARC computational scientist. You may reach out to a member of the team directly or submit a ticket requesting a consultation via arc.vt.edu/support.
ARC's Visionarium Lab provides world-class support for Virginia Tech researchers using virtual- and mixed-reality visualization. Here is a highlight from the Building and Construction Department taking advantage of the unique HyperCube instrument:
S. Hasanzadeh, N. F. Polys and J. M. De La Garza, "Presence, Mixed Reality, and Risk-Taking Behavior: A Study in Safety Interventions," in IEEE Transactions on Visualization and Computer Graphics. doi: 10.1109/TVCG.2020.2973055
Dr. Hasanzadeh will be joining Purdue's Civil Engineering faculty this fall. Congratulations!
Abstract: Immersive environments have been successfully applied to a broad range of safety training in high-risk domains. However, very little research has used these systems to evaluate the risk-taking behavior of construction workers. In this study, we investigated the feasibility and usefulness of providing passive haptics in a mixed-reality environment to capture the risk-taking behavior of workers, identify at-risk workers, and propose injury-prevention interventions to counteract excessive risk-taking and risk-compensatory behavior. Within a mixed-reality environment in a CAVE-like display system, our subjects installed shingles on a (physical) sloped roof of a (virtual) two-story residential building on a morning in a suburban area. Through this controlled, within-subject experimental design, we exposed each subject to three experimental conditions by manipulating the level of safety intervention. Workers' subjective reports, physiological signals, psychophysical responses, and reactionary behaviors were then considered as promising measures of Presence. The results showed that our mixed-reality environment was a suitable platform for triggering behavioral changes under different experimental conditions and for evaluating the risk perception and risk-taking behavior of workers in a risk-free setting. These results demonstrated the value of immersive technology to investigate natural human factors.
Virginia Tech and the Web3D Consortium hosted a one-day workshop providing presentations to Naval enterprise leaders on the use of collaborative Web-based X3D visualization techniques by government, academia, and industry practitioners. Virginia Tech's Advanced Research Computing (ARC) and Center for Geospatial Information Technology (CGIT) hosted the event, contributing proven, innovative methods for data fusion and interactive 3D visualization.
Friday, December 6, 2019 (all day): Virginia Tech Executive Briefing Center, 900 N Glebe Rd, Arlington, VA 22203, Falls Church room.
The workshop fostered networking and showcased current and emerging capabilities for enterprise-scale, networked, geo-enabled 3D communications. Improving shared understanding and technical coordination agility can feasibly strengthen digital connectedness, accelerate consensus- and decision-making across systems engineering, and advance planning conceptualization and collaborative virtual rehearsal for ashore and afloat logistics activities. Building shared priorities adds value for everyone. Workshop participants discussed their reactions "around the room" together, sharing ideas about opportunities and potential synergies that can further the art of the possible and realize shared innovation.
The workshop showcased the benefits and strengths of using the Web3D Consortium’s Extensible 3D (X3D) open 3D standards for the WWW and included:
Demonstrations using 3D scanners, 3D software, and 3D data storage and processing proved compelling use cases for how the X3D standard can enable new realities of collaborative 3D visualization
Overview of Web3D Standards and their value proposition in the ecology of 3D data representations and file formats
Appreciation of the long game of technology and information-systems investments, borne out by this open International Standard (ISO/IEC) and its WWW community, including dozens of tools, pipelines, and engines.
Presentations from the workshop are available at the link below.
Virginia Tech's Advanced Research Computing supports computational science in all its forms across the university. As part of the Division of IT, we support and collaborate with faculty and Extension agents across the institution, for example by supporting world-class capabilities in the technologies for 3D capture, simulation, analysis, and interactive visualization of real places over time. The use of rich data layers, computational models, and 3D rendering has ushered in a new era of research, education, and training toward smart (and effective) Geodesign.
Dr. John Munsell, Associate Professor and Forest Management Extension Specialist, teaches his FOR 4334 class with a strong project component. Students must apply the design principles and Best Management Practices (BMPs) they have learned to a real location/site. This process involves many stakeholders and the effective communication of new spatial and agronomic concepts, a perfect opportunity for immersive, interactive 3D communication. The Fall 2019 student class project presentations were held in the HyperCube immersive projection environment in the Visionarium Lab. A video of his 2018 class in the Visionarium Lab can be seen here. Drs. Steven Kruger, Nicholas Polys, and Lance Arsenault collaborated on the data collection, processing, and immersive visualization.
The latest news includes the award of a new $590,000 grant, led by Dr. Munsell. This award will further support our land-grant mission with a new age of impactful education and Extension work on Appalachian Non-Timber Forest Products (NTFPs).
"The funding will continue federal support of the Appalachian Beginning Forest Farmer Coalition (ABFFC), which is based at Virginia Tech and was initiated in 2015. The project receives funding from a variety of sources in addition to the federal government, and which to date totals $1.5 million including the latest grant via a program of the United States Department of Agriculture. " - https://news.mongabay.com/
Polys NF, Sforza P, Hession WC, Munsell J. Extensible experiences: fusality for stream and field. In Proceedings of the 21st International Conference on Web3D Technology 2016 Jul 22 (pp. 179-180). ACM.
Polys N, Hotter J, Lanier M, Purcell L, Wolf J, Hession WC, Sforza P, Ivory JD. Finding frogs: using game-based learning to increase environmental awareness. In Proceedings of the 22nd International Conference on 3D Web Technology 2017 Jun 5 (p. 10). ACM.
Polys N, Newcomb C, Schenk T, Skuzinski T, Dunay D. The value of 3D models and immersive technology in planning urban density. In Proceedings of the 23rd International ACM Conference on 3D Web Technology 2018 Jun 20 (p. 13). ACM.
Dr. Srijith Rajamohan and Alana Romanella are working on a deep-learning-based interactive visualization tool to understand and plot political ideologies based on Twitter activity. They have been collecting data over the past four months, analyzing roughly 3 million tweets based on certain hashtags. The goal of their work is to construct a visualization tool that can help identify political ideology.
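The hashtag-based collection step described above can be sketched as follows. This is an illustrative example only: the hashtags, tweet records, and function names are hypothetical and do not reflect the authors' actual pipeline or data.

```python
from collections import Counter

# Hypothetical set of tracked hashtags (not the project's real list).
TRACKED_HASHTAGS = {"#election", "#policy"}

def extract_hashtags(text):
    """Return the lowercased hashtags appearing in a tweet's text."""
    return {word.lower() for word in text.split() if word.startswith("#")}

def filter_tweets(tweets, tracked=TRACKED_HASHTAGS):
    """Keep only tweets containing at least one tracked hashtag."""
    return [t for t in tweets if extract_hashtags(t["text"]) & tracked]

# Toy sample standing in for the collected stream.
sample = [
    {"user": "a", "text": "Watching the debate tonight #Election"},
    {"user": "b", "text": "Lovely weather today, no politics here"},
    {"user": "c", "text": "New white paper out #policy #research"},
]
kept = filter_tweets(sample)
tag_counts = Counter(
    tag for t in kept for tag in extract_hashtags(t["text"]) & TRACKED_HASHTAGS
)
```

Per-hashtag counts like `tag_counts` are the kind of aggregate that could feed a downstream ideology-plotting visualization.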
Color is a fundamental component of the world around us and of our experience. It plays a key role in our understanding of our environment, from science and engineering to design and art ... but how does it work? And what makes digital color different?
This September, renowned color and visualization expert Theresa-Marie Rhyne will visit Virginia Tech to share her perspective on the theory and practice of color. With years of experience building notable visualization programs and as the author of the recent book, Applying Color Theory to Digital Media and Visualization, Theresa-Marie will visit with faculty and students as well as give a lecture to a graduate Computer Science class and to the broader university community. Her talk is scheduled for 1 p.m. Monday, September 23rd, in Torgersen Hall 1100.
“Color Fundamentals for Digital Content Creation & Visualization"
Abstract: We provide an overview of the fundamentals of color theory and approaches to color selection for visualization and exploration. Our journey includes an introduction to the concepts of color models and harmony, a review of color-vision principles, the definition of color gamuts, spaces, and systems, and a highlight of online and mobile apps for performing color analyses of digital media. We feature concepts from art and design, such as extending the fundamentals of the Bauhaus into digital media, as well as review color perception and appearance principles from vision and visualization researchers and practitioners. Newly emerging trends in automated color selection and deep-learning colorization are also noted.
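As a small concrete illustration of the "color models" the abstract mentions, Python's standard `colorsys` module converts between the RGB model used by displays and perceptually oriented models like HSV (hue, saturation, value):

```python
import colorsys

# Pure red in RGB maps to hue 0 with full saturation and value in HSV.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# The conversion round-trips back to the same RGB triple.
r, g, b = colorsys.hsv_to_rgb(h, s, v)
```

Working in HSV makes operations such as "same hue, lower saturation" trivial, which is one reason color-selection tools prefer such models over raw RGB.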
About the Instructor: Theresa-Marie Rhyne is an expert in the field of computer-generated visualization and a consultant who specializes in applying artistic color theories to visualization and digital media. Her book, "Applying Color Theory to Digital Media and Visualization," was published by CRC Press in 2016 and is recognized by librarians and color professionals as a best-selling reference. Her short course at the premier conference on computer graphics and interactive techniques, SIGGRAPH 2019, was standing room only. Theresa-Marie has consulted with the Stanford University Visualization Group on a color suggestion prototype system, with the Center for Visualization at the University of California at Davis, and with the Scientific Computing and Imaging Institute at the University of Utah on applying color theory to ensemble data visualization. Prior to her consulting work, Theresa-Marie founded two visualization centers: (1) the United States Environmental Protection Agency's Scientific Visualization Center and (2) the Center for Visualization and Analytics at North Carolina State University.
A new collaboration is underway across the Commonwealth of Virginia as the CLAS-12 instrument comes online at Jefferson National Labs: The Center for Nuclear Femtography is a Commonwealth-funded center dedicated to imaging the heart of the atom: the nucleon. In 2018, researchers from around the state were called to the first annual Symposium on Nuclear Femtography, held in Charlottesville; later that year, a round of pilot projects was funded to build partnerships across universities and across disciplines.
As a trans-disciplinary effort, the Center for Nuclear Femtography (CNF) does not just involve Virginia physicists: its collaborations draw on experts from across the country and the world, including computer scientists, statisticians, mathematicians, and mechanical engineers, to name a few. ARC faculty Nicholas Polys and Srijith Rajamohan are Co-PIs on two of these CNF pilot projects: "Visualizing Femtoscale Dynamics" and "Next-generation Visual Analysis Workspace for Multidimensional Nuclear Femtography Data". Phase One is concluding with several exchange visits, meetings and seminars, and a first set of deliverables (public soon).