

People On The Move 53013

Poetry has promoted Don Share to editor. Share was serving as a senior editor at the magazine.

The Hearst Corporation has named Lincoln Millstein as a senior vice president and special assistant to the CEO. Millstein was serving as an executive vice president and deputy group head at Hearst Newspapers.

Drea Bernardi has been named director of content development at Magnet Media. Bernardi was previously a production coordinator for Mario Batali. Also, Paul Kontonis is now general manager. Kontonis was a vice president and group director of brand content at The Third Act.

Modern Luxury Interiors has tapped Drew Limsky as its editor-in-chief. Limsky joins the publication from Mariner, where he was also editor-in-chief.

Devin Tomb is now an associate lifestyle editor at SELF. Tomb was formerly an associate editor at Seventeen. And Deirdre Daly-Markowski was named integrated digital director. She was previously corporate partnership director at Condé Nast Media Group.

Real Simple promoted Lindsay Hunt to associate food editor. Hunt was previously serving as an assistant food editor.

Time Inc. Branded Solutions has named Tom Kirwan vice president of digital sales. Kirwan was an associate publisher for the company's entertainment group.


CNET Asks: Do you want a wearable smartphone?

Smartphone design just got taken to another level. Nubia, an associate company of Chinese telecommunications company ZTE, just unveiled the world's first wearable smartphone at MWC 2019. The device (or is it a smartwatch?), named the Nubia Alpha, looks like something out of a sci-fi novel, and it's a first step toward the wearable tech of the future. The design is a bit cumbersome and the gold a bit gaudy, but the execution is commendable. You can make and receive phone calls, take pictures, and control it either with your fingers or with a series of hand gestures, all from the water-resistant band around your wrist. Check out more details on the design and functionality of it here. Nubia hasn't announced a release date or set a price, but it has stated the phone will be available for purchase; it's not just a concept.

Video: The Nubia Alpha looks like either a house arrest bracelet or Batman's phone (2:47). Photo gallery: The Nubia Alpha wraps a phone around your wrist.

We have seen concepts and prototypes of wearable phones before, and of course the smartwatch accompaniments to phones, like the Apple Watch, but Nubia seems to be the first in the game to actually sell a standalone wearable phone. This very well may be the phone of the future, but is the Nubia Alpha the one to propel us there? Are you even interested in wearable phones such as this? We have questions like this in the poll below, and we would love to gauge your reaction to this phone. If you feel like explaining a bit more, hop on over to the comment section and let us know your opinion. Can't wait to see your responses.

Check out previous installments of CNET Asks here, and cast your votes on a wide range of topics. If there is a particular question you'd like to see asked, or if you'd like a shot at being featured in a future edition, join us at CNET Member Asks and submit your topic idea.


Building VR experiences with React VR 2.0: How to create a maze that's…

In today's tutorial, we will examine the functionality required to build a simple maze. There are a few ways we could build a maze. The most straightforward way would be to fire up our 3D modeler package (say, Blender) and create a labyrinth out of polygons. This would work fine and could be very detailed.

However, it would also be very boring. Why? The first time we get through the maze will be exciting, but after a few tries, you'll know the way through. When we construct VR experiences, you usually want people to visit often and have fun every time.

This tutorial is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you will learn how to create amazing 360 and virtual reality content that runs directly in your browser.

A modeled labyrinth would be boring. Life is too short to do boring things. So, we want to generate a Maze randomly. This way, you can change the Maze every time so that it'll be fresh and different. The way to do that is through random numbers, but to ensure that the Maze doesn't shift around us, we want to actually do it with pseudo-random numbers. To start doing that, we'll need a basic application created. Please go to your VR directory and create an application called 'WalkInAMaze':

```
react-vr init WalkInAMaze
```

Almost random: pseudo-random number generators

To have a chance of replay value or being able to compare scores between people, we really need a pseudo-random number generator. The basic JavaScript Math.random() cannot be seeded; it gives you a different sequence of numbers every time. We need a pseudo-random number generator that takes a seed value. If you give the same seed to the random number generator, it will generate the same sequence of random numbers. (They aren't completely random but are very close.) Random number generators are a complex topic; for example, they are used in cryptography, and if your random number generator isn't completely random, someone could break your code.

We aren't so worried about that, we just want repeatability. Although the UI for this may be a bit beyond the scope of this book, creating the Maze in a way that clicking on Refresh won't generate a totally different Maze is really a good thing and will avoid frustration on the part of the user. This will also allow two users to compare scores; we could persist a board number for the Maze and show this. This may be out of scope for our book; however, having a predictable Maze will help immensely during development. If it wasn't for this, you might get lost while working on your world. (Well, probably not, but it makes testing easier.)

Including library code from other projects

Up to this point, I've shown you how to create components in React VR (or React). JavaScript interestingly has a historical issue with include. With C++, Java, or C#, you can include a file in another file or make a reference to a file in a project. After doing that, everything in those other files, such as functions, classes, and global properties (variables), is then usable from the file in which you've issued the include statement.

With a browser, the concept of "including" JavaScript is a little different. With Node.js, we use package.json to indicate what packages we need.
To bring those packages into our code, we will use the following syntax in your .js files:

```javascript
var MersenneTwister = require('mersenne-twister');
```

Then, instead of using Math.random(), we will create a new random number generator and pass a seed, as follows:

```javascript
var rng = new MersenneTwister(this.props.Seed);
```

From this point on, you just call rng.random() instead of Math.random().

We can just use npm install and the require statement for properly formatted packages. Much of this can be done for you by executing the npm command:

```
npm install mersenne-twister --save
```

Remember, the --save option updates our manifest in the project. While we are at it, we can install another package we'll need later:

```
npm install react-vr-gaze-button --save
```

Now that we have a good random number generator, let's use it to complicate our world.

The Maze render()

How do we build a Maze? I wanted to develop some code that dynamically generates the Maze; anyone could model it in a package, but a VR world should be living. Having code that can dynamically build a Maze of any size (to a point) will allow repeat playing of your world.

There are a number of JavaScript packages out there for printing mazes. I took one that seemed to be everywhere, in the public domain, on GitHub and modified it for HTML. This app consists of two parts: Maze.html and makeMaze.js. Neither is React, but both are JavaScript. It works fairly well, although the numbers don't really represent exactly how wide it is.

First, I made sure that only one x was displaying, both vertically and horizontally. This will not print well (lines are usually taller than wide), but we are building a virtually real Maze, not a paper Maze.

The Maze that we generate with the files at Maze.html (localhost:8081/vr/maze.html) and the JavaScript file, makeMaze.js, will now look like this:

```
x1xxxxxxxx x xxxx x x xx x x xx xxxxx xx x x xx x x x xx x 2xxxxxxxxx
```

It is a little hard to read, but you can count the squares vs. xs. Don't worry, it's going to look a lot fancier.

Now that we have the HTML version of a Maze working, we'll start building the hedges. This is a slightly larger piece of code than I expected, so I broke it into pieces and loaded the Maze object onto GitHub rather than pasting the entire code here, as it's long. You can find a link for the source at: http://bit.ly/VR_Chap11

Adding the floors and type checking

One of the things that looks odd with a 360 Pano background, as we've talked about before, is that you can seem to "float" against the ground. One fix, other than fixing the original image, is to simply add a floor. This is what we did with the Space Gallery, and it looks pretty good as we were assuming we were floating in space anyway.

For this version, let's import a ground square. We could use a large square that would encompass the entire Maze; we'd then have to resize it if the size of the Maze changes. I decided to use a smaller cube and alter it so that it's "underneath" every cell of the Maze. This would allow us some leeway in the future to rotate the squares for worn paths, water traps, or whatever.

To make the floor, we will use a simple cube object that I altered slightly and UV mapped. I used Blender for this. We also import a Hedge model, and a Gem, which will represent where we can teleport to.
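Before we wire these imports into Maze.js, here is a minimal, self-contained sketch of the repeatability idea behind the seeded generator. It assumes only the mersenne-twister package installed earlier and the same constructor-plus-random() calls the tutorial already uses; the seed values are arbitrary.

```javascript
// Two generators created with the same seed produce the same sequence, which is
// what lets a maze be regenerated exactly (or shared between two players).
var MersenneTwister = require('mersenne-twister');

var rngA = new MersenneTwister(42);
var rngB = new MersenneTwister(42);

console.log(rngA.random() === rngB.random()); // true
console.log(rngA.random() === rngB.random()); // true, and so on down the sequence

// A different seed gives a different, but equally repeatable, layout.
var rngC = new MersenneTwister(7);
console.log(rngC.random()); // differs from the seed-42 sequence
```

In the Maze component, the seed arrives as a prop (this.props.Seed), so re-rendering with the same prop can rebuild exactly the same maze.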
Inside 'Maze.js' we added the following code:

```javascript
import Hedge from './Hedge.js';
import Floor from './Hedge.js';
import Gem from './Gem.js';
```

Then, inside Maze.js, we could instantiate our floor with a Floor element (the full JSX for this is in the downloadable source). Notice that we don't use 'vr/components/Hedge.js' when we do the import; we're inside Maze.js. However, in index.vr.js, to include the Maze we do need:

```javascript
import Maze from './vr/components/Maze.js';
```

It's slightly more complicated, though. In our code, the Maze builds the data structures when props have changed; when moving, if the maze needs rendering again, it simply loops through the data structure and builds a collection (mazeHedges) with all of the floors, teleport targets, and hedges in it. Given this, to create the floors, the line in Maze.js actually pushes a Floor element into that collection:

```javascript
mazeHedges.push(<Floor … />);  // props omitted here; see the source files
```

Here is where I ran into two big problems, and I'll show you what happened so that you can avoid these issues. Initially, I was bashing my head against the wall trying to figure out why my floors looked like hedges. This one is pretty easy: we imported Floor from the Hedge.js file. The floors will look like hedges (did you notice this in my preceding code? If so, I did this on purpose as a learning experience. Honest).

This is an easy fix. Make sure that you code import Floor from './floor.js'; note that Floor is not type-checked. (It is, after all, JavaScript.) I thought this was odd, as the hedge.js file exports a Hedge object, not a Floor object, but be aware that you can rename objects as you import them.

The second problem I had was more of a simple goof that is easy to make if you aren't really thinking in React. You may run into this. JavaScript is a lovely language, but sometimes I miss a strongly typed language. Here is what I did. Inside the maze.js file, I had a loop along these lines (the important part is the SizeX + 2 upper bound):

```javascript
for (var j = 0; j < SizeX + 2; j++) {
  // build this row of the maze
}
```

After some debugging, I found out that the value of j was going from 0 to 42. Why did it get to 42 instead of 6? The reason was simple, but we need to fully understand JavaScript to program complex apps. The mistake was in initializing SizeX to be '4'; this makes it a string variable. When the loop bound is calculated, JavaScript appends the integer 2 to the string '4' and gets the string '42', which is then converted to the number 42 when it is compared with j. When this happens, very weird things result.

When we were building the Space Gallery, we could get away with passing string values such as '5.1' as props to the box, and then later using a transform statement like this inside the class:

```javascript
transform: [ { translate: [ this.props.MyX, -1.7, this.props.MyZ ] } ]
```

React/JavaScript will put the string values into this.props.MyX, then realize it needs a number, and quietly do the conversion. However, when you get to more complicated objects, such as our Maze generation, you won't get away with this.

Remember that your code isn't "really" JavaScript. It's processed. At its heart, this processing is fairly simple, but the implications can be a killer. Pay attention to what you code. With a loosely typed language such as JavaScript, with React on top, any mistakes you make will be quietly converted to something you didn't intend. You are the programmer. Program correctly.
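The coercion pitfall above is easy to reproduce outside React VR. Here is a tiny, self-contained sketch you can run with plain Node; the variable names mirror the prose rather than the book's actual source, and parseInt is just one common fix.

```javascript
// The SizeX-as-a-string bug, boiled down.
var SizeX = '4';                 // arrived as a string, e.g. SizeX="4" in JSX

console.log(SizeX + 2);          // "42" -- string concatenation, not addition
console.log(typeof (SizeX + 2)); // "string"

// Convert props to numbers before doing arithmetic with them.
var sizeX = parseInt(SizeX, 10); // Number(SizeX) or +SizeX also work

for (var j = 0; j < sizeX + 2; j++) {
  // build one row of the maze here
}
console.log(j);                  // 6, as intended, not 42
```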
So, back to the Maze. The Hedge and Floor are straightforward copies of the initial Gem code. Let's take a look at our starting Gem, although note it gets a lot more complicated later (and in your source files):

```javascript
import React, { Component } from 'react';
import { asset, Box, Model, Text, View } from 'react-vr';

export default class Gem extends Component {
  constructor() {
    super();
    this.state = { Height: -3 };
  }
  render() {
    return null; // the rendered JSX is omitted in this excerpt; see the source files
  }
}
```

The Hedge and Floor are essentially the same thing. (We could have made a prop be the file loaded, but we want a different behavior for the Gem, so we will edit this file extensively.)

To run this sample, first create a directory as you have before, called WalkInAMaze. Once you do this, download the files from the Git source for this part of the article (http://bit.ly/VR_Chap11). Once you've created the app, copied the files, and fired it up (go to the WalkInAMaze directory and type npm start), you should see the maze once you look around, except there is a bug. The maze looks as it should if you use the file 'MazeHedges2DoubleSided.gltf' in Hedge.js, in the Model statement.

Now, how did we get those neat-looking hedges in the game? (OK, they are pretty low poly, but it is still pushing it.) One of the nice things about the pace of improvement in web standards is the new features they bring. Instead of just the .obj file format, React VR now has the capability to load glTF files.

Using the glTF file format for models

glTF files are a new file format that works pretty naturally with WebGL. There are exporters for many different CAD packages. The reason I like glTF files is that getting a proper export is fairly straightforward. Lightwave OBJ files are an industry standard, but in the case of React, not all of the options are imported. One major one is transparency. The OBJ file format allows it, but as of the time of writing this book, it wasn't an option. Many other graphics shaders that modern hardware can handle can't be described with the OBJ file format.

This is why glTF files are the next best alternative for WebVR. It is a modern and evolving format, and work is being done to enhance its capabilities and make a fairly good match between what WebGL can display and what glTF can export.

This tutorial is, however, about interacting with the world, so I'll only give a brief mention of how to export glTF files and provide the objects, especially the Hedge, as glTF models.

The nice thing with glTF from the modeling side is that if you use its material specifications, for example for Blender, then you don't have to worry that the export won't be quite right. Today's Physically Based Rendering (PBR) tends to use the metallic/roughness model, and these materials import better than trying to figure out how to convert PBR materials into the OBJ file's specular lighting model. A metallic-looking Gem built this way is what I'm using as the gaze point.

Using the glTF Metallic Roughness model, we can assign the texture maps that programs such as Substance Designer calculate, and import them easily. The resulting figures look metallic where they are supposed to be metallic and dull where the paint still holds on.

I didn't use Ambient Occlusion here, as this is a very convex model; something with more surface depressions would look fantastic with Ambient Occlusion. It would also look great with architectural models, for example, furniture.

To convert your models, there is user documentation at http://bit.ly/glTFExporting. You will need to download and install the Blender glTF exporter. Or, you can just download the files I have already converted.
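Before walking through the export steps, here is a rough sketch of the kind of component those converted files end up in. The Gem listing earlier in this excerpt had its render body omitted, so this is not the book's actual code: the TeleportGem.gltf file name, the MyX/MyZ props, and the exact shape of the source prop (a gltf2 key inside source) are assumptions to check against your React VR version and the downloadable source files.

```javascript
// A hedged sketch of a Gem-like component, NOT the book's code. File name,
// prop names, and the source-prop shape are illustrative assumptions.
import React, { Component } from 'react';
import { asset, Model, View } from 'react-vr';

export default class Gem extends Component {
  constructor() {
    super();
    this.state = { Height: -3 };
  }

  render() {
    // TeleportGem.gltf is a placeholder; swap in the gem model from the source
    // files. MyX / MyZ follow the prop naming of the Space Gallery example.
    return (
      <View>
        <Model
          source={{ gltf2: asset('TeleportGem.gltf') }}
          style={{
            transform: [
              { translate: [this.props.MyX, this.state.Height, this.props.MyZ] },
            ],
          }}
        />
      </View>
    );
  }
}
```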
If you do the export, in brief, you do the following steps:

1. Download the files from http://bit.ly/gLTFFiles. You will need the gltf2_Principled.blend file, assuming that you are on a newer version of Blender.
2. In Blender, open your file, then link to the new materials. Go to File->Link, then choose the gltf2_Principled.blend file. Once you do that, drill into "NodeTree" and choose either glTF Metallic Roughness (for metal) or glTF Specular Glossiness for other materials.
3. Choose the object you are going to export; make sure that you choose the Cycles renderer.
4. Open the Node Editor in a window. Scroll down to the bottom of the Node Editor window, and make sure that the box Use Nodes is checked.
5. Add the node via the nodal menu, Add->Group->glTF Specular Glossiness or Metallic Roughness.
6. Once the node is added, go to Add->Texture->Image texture. Add as many image textures as you have image maps, then wire them up.

To export the models, I recommend that you disable camera export and combine the buffers, unless you think you will be exporting several models that share geometry or materials.

Now, to include the exported glTF object, use the Model component as you would with an OBJ file, except you have no MTL file. The materials are all described inside the .glTF file; you just put the filename as a gltf2 prop on the Model (the sketch of the Gem component above shows one possible shape for this). To find out more about these options and processes, you can go to the glTF export web site. This site also includes tutorials on major CAD packages and the all-important glTF shaders (for example, the Blender model I showed earlier).

I have loaded several .OBJ files and .glTF files so you can experiment with different combinations of low poly and transparency. When glTF support was added in React VR version 2.0.0, I was very excited, as transparency maps are very important for a lot of VR models, especially vegetation, just like our hedges. However, it turns out there is a bug in WebGL or three.js that does not render the transparency properly. As a result, I have gone with a low-polygon version in the files on the GitHub site; the pictures referred to above were made with the file MazeHedges2DoubleSided.gltf in the Hedges.js file (in vr/components).

If you get 404 errors, check the paths in the glTF file. It depends on which exporter you use: if you are working with Blender, the gltf2 exporter from the Khronos group calculates the path correctly, but the one from Kupoman has options, and you could export the wrong paths.

We discussed important mechanics of props, state, and events. We also discussed how to create a maze using pseudo-random number generators to make sure that our props and state didn't change chaotically.

To know more about how to create, move around in, and make worlds react to us in Virtual Reality, including basic teleport mechanics, do check out the book Getting Started with React VR.

Read More:
Google Daydream powered Lenovo Mirage solo hits the market
Google open sources Seurat to bring high precision graphics to Mobile VR
Oculus Go, the first stand alone VR headset arrives!


Tackle trolls with Machine Learning bots: Filtering out inappropriate content just got…

The most feared online entities of the present day are trolls. Trolls, a fearsome bunch of fake or pseudonymous online profiles, tend to attack online users, mostly celebrities, sportspersons, or political figures, using a wide range of methods. One of these methods is to post obscene or NSFW (Not Safe For Work) content on your profile or website wherever User Generated Content (UGC) is allowed. This can create unnecessary attention and cause legal trouble for you too.

The traditional way out is to get a moderator (or a team of them) and let all the UGC pass through this moderation system. This is a sustainable solution for a small platform. But if you are running a large-scale app, say a publishing app where you publish one hundred stories a day, and the success of these stories depends on user interaction with them, then this model of manual moderation becomes unsustainable. The more UGC there is, the longer the turnaround time and the larger the moderation team. This results in escalating costs for a purpose that is not contributing to your business growth in any manner.

That's where Machine Learning could help. Machine Learning algorithms that can scan images and content for possibly abusive or adult content are a better solution than manual moderation. Tech giants like Microsoft, Google, and Amazon have a ready solution for this. These companies have created APIs which are commercially available to developers. You can incorporate these APIs in your application to weed out the filth served up by the trolls. The different APIs available for this purpose are Microsoft moderation, Google Vision, AWS Rekognition, and Clarifai.

Dataturks have made a comparative study of these APIs on one particular dataset to measure their efficiency. They used a YACVID dataset with 180 images and manually labelled 90 of these images as nude and the rest as non-nude. The dataset was then fed to the four APIs mentioned above, and their efficiency was tested based on the following parameters:

True Positive (TP): given a safe photo, the API correctly says so.
False Positive (FP): given an explicit photo, the API incorrectly classifies it as safe.
False Negative (FN): given a safe photo, the API is not able to detect that it is safe.
True Negative (TN): given an explicit photo, the API correctly says so.

TP and TN are the two cases in which the system behaved correctly. An FP means that the app is vulnerable to attacks from trolls; an FN means the efficiency of the system is low and hence not practically viable. 10% of the cases would be such that the API can't decide whether the content is explicit or not; those would be sent for manual moderation. This would bring down the maintenance cost of the moderation team.

According to the results they received (source: Dataturks), the best standalone API is Google Vision, with 99% accuracy and a 94% recall value. The recall value implies that, of the truly safe images, 94% are correctly recognized as such. The best results, however, were obtained with the combination of Microsoft and Google.

Dataturks also compared response times (source: Dataturks). The response time might have been affected by the fact that all the images accessed by the APIs were stored in Amazon S3; hence, the AWS API might have had an unfair advantage on response time. The timings were noted for 180 image calls per API.

The cost is the lowest for AWS Rekognition: $1 for 1,000 calls to the API. It's $1.2 for Clarifai and $1.5 for both Microsoft and Google.
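To make those figures concrete, here is a small sketch, written in JavaScript to match the code elsewhere in this roundup, of how accuracy, precision, and recall fall out of the four counts defined above. The sample numbers are illustrative only, not the counts from the Dataturks study.

```javascript
// Derive headline metrics from confusion-matrix counts, where "positive"
// means the API judged a photo to be safe (matching the definitions above).
function moderationMetrics(tp, fp, fn, tn) {
  const total = tp + fp + fn + tn;
  return {
    accuracy: (tp + tn) / total, // how often the API was right overall
    precision: tp / (tp + fp),   // of photos passed as safe, how many really were safe
    recall: tp / (tp + fn),      // of truly safe photos, how many the API recognized
  };
}

// Illustrative counts for a 180-image test set (not the study's actual numbers).
console.log(moderationMetrics(85, 2, 5, 88));
// => { accuracy: 0.961..., precision: 0.977..., recall: 0.944... }
```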
The one notable drawback of the Amazon API was that the images had to be stored as S3 objects, or converted into S3 objects first. All the other APIs accepted any web link as a possible source of images.

What this study shows is that filtering out negative and explicit content in your app is much easier now. You might still have to have a small team of moderators, but their jobs will be made a lot easier by the ML models implemented in these APIs. Machine Learning is paving the way for us to be safe from the increasing menace of trolls, a threat to free speech and the open sharing of ideas, which were the founding stones of the internet and the world wide web as a whole. Will this discourage trolls from continuing their slandering, or will it create counter-systems to bypass the APIs and checks? We can only know in time.

Read Next
Facebook launches a 6-part Machine Learning video series
Google's new facial recognition patent uses your social network to identify you!
Microsoft's Brad Smith calls for facial recognition technology to be regulated


Shadow Robot joins AVATAR X program to bring real-world avatars into space

The Shadow Robot Company, experts in grasping and manipulation for robotic hands, announced that they are joining a new space avatar program named AVATAR X. This program is led by ANA HOLDINGS INC. (ANA HD) and the Japan Aerospace Exploration Agency (JAXA).

AVATAR X aims to accelerate the integration of technologies such as robotics, haptics, and Artificial Intelligence (AI) to enable humans to remotely build camps on the Moon, support long-term space missions, and further explore space from Earth. In order to make this possible, Shadow will work closely with the programme's partners, leveraging the unique teleoperation system that it has already developed and that is also available to purchase. AVATAR X is set to be launched as a multi-phase programme. It aims to revolutionize space development and make living on the Moon, Mars, and beyond a reality.

What will the AVATAR X program include?

The AVATAR X program will comprise clever elements including Shadow's Dexterous Hand, which can be controlled by a CyberGlove worn by the operator. This hand will be attached to a UR10 robot arm controllable by a PhaseSpace motion capture tool worn on the operator's wrist. Both the CyberGlove and the motion capture wrist tool have mapping capability so that the Dexterous Hand and the robot arm can mimic an operator's movements. The new system allows remote control of robotic technologies while providing distance and safety. Furthermore, Shadow uses an open source platform providing full access to the code to help users develop the software for their own specific needs.

Shadow's Managing Director, Rich Walker, says, "We're really excited to be working with ANA HD and JAXA on the AVATAR X programme and it gives us the perfect opportunity to demonstrate how our robotics technology can be leveraged for avatar or teleoperation scenarios away from UK soil, deep into space. We want everyone to feel involved at such a transformative time in teleoperation capabilities and encourage all those interested to enter the AVATAR XPRIZE competition."

To know more about AVATAR X in detail, visit ANA Group's press release.

Read Next
Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
How Rolls Royce is applying AI and robotics for smart engine maintenance
AI powered Robotics: Autonomous machines in the making
