Imagine you’re starting a journey. You strap into the exit row to enjoy a sweet listen to Miley Cyrus’ latest when the flight attendant pokes out your earplugs and says, “Do you realize you’re sitting in an exit row … ?” and so on.
And then: “I need a verbal yes.”
Not a nod. Not a thumbs-up. Not some shrug and reinsertion of the plugs. Nothing less than a whole-hearted, full-throated, mano a mano, "Yes, sir, I understand." In English, no less.
Now imagine if digital marketers demanded the same level of informed consent when asking us to opt into their various outrages. Targeting as we know it would grind to a halt — and mobile marketing would end.
Most mobile apps ask us kindly to agree to all kinds of data sharing up front. We do, assuming the app won’t work if we don’t (it will). One of these consents may be our location. We use the app and move on to another. Did you realize that first app is probably still running and collecting location data, transmitting it back to advertiser HQ? No?! But you “consented.”
Now take the question of cross-device identification. This is a hot topic, and it comes up daily in conversations with startups and established players, many of whom have active and well-funded efforts under way to ride this dragon down.
What is cross-device identification? Consider how most of us engage with the online world: we have a browser at work, a different browser or two at home, a smartphone or two, a tablet, an Xbox, maybe a connected TV, all of which use the Internet protocol to connect to the same potential marketers … and all of which are owned by me. Trouble is, to the web world, these "devices" all look like different people. Who's to say they aren't?
In a world of one-to-one marketing, where we expect our favorite sites and brands and publications to recognize us — “It’s me, your old pal, Marty!” — the difficulty of assembling a unified picture of “me” across my devices is a marketing mosh pit.
Mobile makes it worse. Apple notoriously rejects third-party cookies, and mobile apps don’t lend themselves easily to casual tags. I wrote a blog post last year identifying cross-device identification as a problem but offering no solutions. Amazing what a year can do.
Whether we realize it or not, the market is triangulating on a solution that renders our “device graph” generally available to outsiders. We pass no judgement here. This work is (for the most part) entirely legal and at least nominally under consumer control. We opted in. The need of marketers to know us is not necessarily at odds with our desire to be known.
How is it being done?
Each mobile device with a carrier has a unique ID. If we visit a website or see an ad on our device, that site or ad (through a pixel) receives our device ID, although not our identity. Carriers (and the NSA) know who we are, but have been reluctant to wade into the murky waters of ad targeting (until recently).
Apple assigns each of its devices a unique advertising identifier called the IDFA; Google's Android uses the equivalent Advertising ID. These identifiers are available for use, for example, by app developers who sell advertising, but access is restricted by Apple and Google.
Apart from these structural IDs, there are various other methods marketers use to try to stitch device graphs together. From most to least accurate, they are:
* Deterministic: This is a euphemism for what the cops call a positive ID. It is available when a person authenticates herself on different devices and browsers. For example, if I log in to Delta.com and the Delta iPhone and iPad apps using the same username, Delta knows those devices are mine. The advantage Facebook, Google, LinkedIn, eBay, Amazon and other mega-communities have here is obvious. Many smaller DMPs such as Lotame, and tag management systems, such as Ensighten, offer this feature.
* Probabilistic: This is a more complex process that uses multiple data points to determine likely statistical matches among devices, providing a higher or lower probability that they belong to the same person. Common data used here are IP address, browser configuration (fonts, headers, plugins), sites visited, time of day and location. DMPs with large data sets, such as Oracle's BlueKai, and startups such as BlueCava do this. Accuracy ranges from 50-80%, depending on whom you ask.
* Householding: The most common method, widely offered by ad tech companies such as Collective, links devices using the IP address transmitted as part of the communication protocol. Since many of us connect all our devices at home through the same wifi router, this is a useful way to link them — to households, not people. A subset of the probabilistic method above, it is also less accurate.
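The deterministic and householding methods above amount to simple grouping operations. Here is a minimal sketch, with entirely made-up device IDs, logins and IP addresses, of how the two kinds of clusters differ:

```python
from collections import defaultdict

# Hypothetical device observations: (device_id, login_user, ip_address)
observations = [
    ("iphone-1", "marty", "73.10.0.5"),
    ("laptop-1", "marty", "73.10.0.5"),
    ("tablet-9", None,    "73.10.0.5"),   # never authenticates
    ("work-pc",  None,    "198.51.100.7"),
]

# Deterministic: devices sharing an authenticated login belong to one person.
by_user = defaultdict(set)
for device, user, _ in observations:
    if user:
        by_user[user].add(device)

# Householding: devices sharing an IP are linked to a household, not a person.
by_ip = defaultdict(set)
for device, _, ip in observations:
    by_ip[ip].add(device)

print(by_user["marty"])    # the "positive ID" cluster of logged-in devices
print(by_ip["73.10.0.5"])  # the household cluster also sweeps in the anonymous tablet
```

Note how the household cluster is broader than the person cluster, which is exactly why householding is the less accurate method.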
Obviously, the ideal scenario would be a huge data table somewhere that maps a set of device IDs and browser cookies to a single individual. And, of course, there are companies doing just that. Facebook and Google's Universal Analytics/Chrome/Android aside, the most notable players here are Neustar, Drawbridge and Tapad. The latter has accumulated a database of about 1.2 billion devices and relationships, pieced together using a complex process that relies on all the above techniques, and more. (Neustar uses deterministic methods.)
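A device graph of this sort is, at bottom, a clustering of identifiers: each pairwise link produced by any of the methods above gets merged into one cluster that stands in for a person. A minimal sketch (hypothetical identifiers and links, not Tapad's actual process) using union-find:

```python
# Union-find over identifiers: each link merges two identifiers' clusters.
parent = {}

def find(x):
    """Return the cluster root for identifier x, compressing paths as we go."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def link(a, b):
    """Record evidence that a and b belong to the same person."""
    parent[find(a)] = find(b)

# Hypothetical evidence from two different methods:
link("cookie-abc", "idfa-123")  # deterministic: same authenticated login
link("idfa-123", "adid-789")    # probabilistic: same IP, habits, time of day

# All three identifiers now resolve to one "person" cluster.
cluster = {d for d in list(parent) if find(d) == find("cookie-abc")}
print(cluster)
```

The hard part of a real device graph is not the merging but the evidence: deciding which pairwise links are trustworthy enough to feed in.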
In fact, Tapad itself is the white label provider running behind many DMPs, attribution platforms, ad targeting companies, and data providers. It has run behind the aforementioned BlueKai, Google’s Adometry, VisualIQ, Exelate, Datalogix, and others.
Recent history shows us that the amnesia of mobile, its inherent identity protection, is a fast-fading phenomenon. Marketers are finding us quickly. Tapad’s founder told me, “Our product was very hard to build, but it’s very easy to sell.”
Don’t blame the mobile marketers. We opted in.
Performance is important because you cannot buy time. If you want to see our position on solid-state arrays (SSAs), our name for all-flash arrays, have a look at these reports. Some interesting things came out of them: there is an interesting inverse relationship between the two notes, SSAs are used differently from the way most people believe, SDS is not a concern, and customers like the new price and licensing models. For those who like these things, or for those who do not but need to know, here is the state of solid state: Magic Quadrant for Solid State Arrays and Critical Capabilities for Solid State Arrays.
No pictures this time, no comment.
I visited a client the other day, and they wanted to talk about data lakes. Someone at the client, not at the meeting, had been promoting the concept of a data lake as an answer to the question we explored. Before I tell you what happened, let me update you on my "opening" position.
A few weeks ago my colleague Nick Heudecker and I published a note on data lakes (see "The Data Lake Fallacy: All Water and No Substance"). The note called out what appeared to be missing from the vendor hype around data lakes: the lack of any sustaining practice (or technology) to help preserve value from reuse of the data in the lake. There IS value in mining information in a lake. But to assume that the IP and structure used to expose that insight and value persist in the data lake is wrong. A data lake does not persist them. In the jargon: no information governance, no sustainable or repeatable value. It seems to be good advice.
Not everyone agrees. Another colleague of mine brought an InfoWorld "review" by a "strategic developer" to my attention (see "Gartner gets the 'data lake' concept all wrong"). It seems we said that data lakes are not useful, and that somehow a large-scale, enterprise-wide, wall-to-wall governance effort is required. Apparently we were also touting proprietary technology. Since we don't support either perspective (the first is devoid of context, and a data lake is not sufficient in either case), I don't even feel the need to respond. If there had been a response to the main fallacy we called out, I would have. Truth is, if you don't maintain any structure in the data you use, how on earth can someone who follows you get a leg up and avoid repeating your effort? Either way, the hype around data lakes continues apace.
So let's go back to the meeting this week with the client.
This client has several established data warehouses, each with some successful, if local, information governance supporting analytics. The client had 17 or so data centers, each supporting one of these data warehouses. The business uses these 17 systems a lot and gets value from the data; it relies on what it gets from them.
There was one question: can we use a data lake? However, we had to drill down to the REAL questions behind what was being asked. There were two real questions, or desires, behind it.
In truth, this client wants to consolidate data centers and, quite separately, adopt a focused information governance program to sustain common data spanning and connecting the local insights for additional value. As far as I can tell, a data lake plays no role in either requirement. Yet it was being pushed by a vendor on one of the end users at this client.
The end user even spotted the fallacy themselves. They asked, "If we used a data lake, don't we actually take a step backward, in that we 'lose' all those currently siloed yet effective IP and governance frameworks?"
YES! A data lake by definition has a zero barrier to entry and so supports zero information governance. Any and all data is accepted, because it has no need to conform or relate to the rest of the data that already exists in the lake. If there IS a cost to enter, it is not a data lake. In contrast, a data warehouse or EDW has a higher barrier to entry. So why not go for a balance? In this case the user was right: a data lake would be a step backward.
So why was a data lake being referenced? Perhaps this vendor is selling a form of data warehouse but wants to use the shiny new silver-bullet name. My final recommendation to the client: forget the new names. Identify the real requirements (data center consolidation, and multi-warehouse information governance) and design the target architecture. If you really want a name for it, let's chat again. But don't use "data lake," since it does not seem to fit.
Today's headlines report that big banks have been hit by cyberattacks, according to the FBI. While this news is alarming, it certainly is not surprising.
Hackers are always probing bank systems, and as long as a year ago law enforcement authorities and regulators put out an advisory to banks about criminals hacking into bank employee accounts to infiltrate their computer networks and, in some cases, steal funds.
Frankly, this isn't new news; it's just the culmination of old news. I imagine that the authorities and security staff were never able to eliminate the hackers from their systems. They have probably been in there for years, and there have probably been multiple actors, ranging from financial hackers to state-sponsored cyberspies.
Wake Up Call
But this should serve as a loud wake-up call for bank boards to elevate security to the top of their agenda, and to make sure their security staff (e.g., the CISO) are doing everything they can to secure the business. They also need to make sure the CISO and IT staff have the business support they need to make it all happen.
Organizational issues, as opposed to technology issues, are generally the main impediments to successful defense of a bank's assets. Organizations need to be aligned in order to properly defend themselves from cyberattacks. Senior and board-level management need to support security initiatives directly by getting involved, not just leaving it to the CIO or CISO to figure out. These IT and IS executives can't do their jobs without business support. And that has to come from the board level, given the siloed nature of these large bank organizations.
What’s the Damage?
While this is cause for alarm, in a sense we should all be prepared for it. When it comes to financial assets being stolen, the banks have strong safeguards in place and can shut down wire and money transfer systems, if they need to, before too much damage is done. So, for example, some unauthorized money transfers could certainly take place, but they would be limited in number if the criminals attempted a mass attack against the money transfer systems. (Of course, the stock market would have an extreme negative reaction if this occurred; hopefully it would be short-lived.)
As far as the data goes, it's safe to say we must assume all our financial information is subject to theft, as are simple credentials such as passwords. That certainly is not a good situation, and banks, intel agencies and other enterprises must do a better job of protecting sensitive data. But I see a lot more money spent on preventing the USE of stolen data than on preventing the theft of the data itself, for simple economic reasons: the use of stolen data directly affects the company's bottom line. The theft of data generally doesn't have that impact unless it's disclosed to the public, since the stolen data is generally used at another enterprise.
Most large financial institutions have spent considerable sums on fraud detection systems that prevent the use of stolen data. They are certainly not perfect, but they do catch the majority of fraud attempts. It's the small financial institutions and their third-party processors that we should be worried about, because they are not securing their systems as well as they should be.
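One of the simplest "prevent the use" controls such systems apply is a velocity rule: flag an account that makes too many transfers in a short window. A sketch, with illustrative thresholds that are not any bank's actual logic:

```python
from datetime import datetime, timedelta

# Illustrative thresholds: more than 3 transfers inside 10 minutes gets flagged.
WINDOW = timedelta(minutes=10)
MAX_TRANSFERS = 3

def velocity_flag(timestamps):
    """timestamps: sorted datetimes of transfers for one account."""
    for i, start in enumerate(timestamps):
        # Count transfers falling within WINDOW of this one.
        in_window = sum(1 for t in timestamps[i:] if t - start <= WINDOW)
        if in_window > MAX_TRANSFERS:
            return True
    return False

t0 = datetime(2014, 8, 28, 12, 0)
burst = [t0 + timedelta(minutes=m) for m in range(5)]  # five transfers in five minutes
print(velocity_flag(burst))       # True: flagged for review
print(velocity_flag(burst[:2]))   # False: normal activity
```

Real systems layer hundreds of such signals (amount, geography, device, counterparty) and score them together, but the economics are the same: the check runs at the moment of use, not at the moment of theft.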
So while it makes me nervous that this is happening, I do believe the large financial services companies can protect their (and our) financial assets such that a massive robbery cannot take place. And as noted, it's safe to assume information is no longer confidential; we just have to compensate for that by preventing the use of stolen information for illicit purposes. It's just the new world order.
As a Gartner analyst, I am fortunate to meet amazing people frequently. Qaizar Hassonjee from Adidas is not only one of them, but one of the most memorable. He is at the heart of miCoach, including miCoach Elite, the system developed in partnership with the top soccer players, coaches and teams of the world where soccer is known as football. The German national team, for instance, practiced with miCoach all last year.
We invited Qaizar Hassonjee to talk at our Catalyst conference earlier this month, and he accepted our invitation! I was tweeting like crazy: "Everyone, drop everything, go to End-User Case Study: Smart Soccer With adidas miCoach Elite Team System!" The session was recorded by Gartner Events On Demand, which offers analyst and guest speaker presentations from all our conferences. Woo-hoo!
Qaizar Hassonjee is a passionate leader who knows how to focus and what to focus on. He leads fantastic innovations, like the creation of a sensor T-shirt to monitor an athlete's heart rate and performance. And this sensor T-shirt is washable! I am writing this blog post because Qaizar Hassonjee and his team got big data right. Here is Gartner's definition of big data (which I have explained in the past):
Big data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.
This is how the big data definition plays out in digital sports.
Part 1. High-volume, high-velocity and high-variety information assets.
[Screenshot: adidas VP of Innovation Qaizar Hassonjee's talk at Catalyst]
miCoach collects players' heart rates, physiological parameters, geolocation and much more in real time, and the data has a lot of unexpected uses. For example, a location heat map proved important to the people who maintain the field.
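The heat map itself is conceptually simple: bin position samples into a coarse grid and count visits per cell. A sketch, with made-up field dimensions and samples:

```python
from collections import Counter

FIELD_W, FIELD_H = 105, 68   # meters, a standard pitch
CELL = 5                     # bin positions into 5 m x 5 m cells

# Hypothetical (x, y) position samples from players' trackers, in meters.
samples = [(10.2, 33.0), (11.9, 34.5), (52.0, 34.0), (12.4, 31.1)]

# Count samples per grid cell; high counts mark the most-trafficked turf.
heat = Counter((int(x // CELL), int(y // CELL)) for x, y in samples)

hottest, visits = heat.most_common(1)[0]
print(hottest, visits)  # the busiest 5 m cell and its sample count
```

Groundskeepers can then resod or water the cells with the highest counts, which is the kind of unexpected secondary use the miCoach team found for the data.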
Part 2 of the definition. Information assets that demand cost-effective, innovative forms of information processing.
The miCoach team focused on serving the right analytics at the right time. They did not make the typical mistake of relying exclusively on their own expertise; they involved cardiologists, physiologists, equipment managers and, of course, coaches and players.
And finally, part 3 of the definition: Information processing for enhanced insight and decision making.
These are the main points that led to the success of miCoach through big data insights:
Follow Svetlana on Twitter @Sve_Sic
Most BCMP tools meet customer needs for recovery plan management and a consistent, repeatable plan development process. The growing focus on BCM program analytics and integration into operational risk management initiatives has increased the sophistication of BCMP tools. Read it here: http://www.gartner.com/document/2833119?ref=shareSummary.
Private healthcare providers are expanding across the world. In Asia, although many governments promise state aid for healthcare, the ground reality (see: Market Insight: Healthcare Provider Industry Primer, 2013) is very different (except, arguably, for the advanced healthcare system of the city-state of Singapore).
India has focused its attention on providing primary care and has mostly let private enterprise build secondary and tertiary care setups (hospitals and super-specialty centres set up by the likes of Apollo, Fortis, Columbia Asia, etc.). But even the primary care setup is perennially plagued by the absence of qualified doctors and staff (something that telemedicine can partly solve) and is an area that the private sector is now closely exploring (starting from large Tier 1 and Tier 2 cities).
China, on the other hand, did invest heavily in healthcare machinery, with yearly investments in excess of $110 million since the late 2000s. Those investments did create healthcare apparatus (in the form of care centres, machinery, equipment, etc.) but could not improve the healthcare quality indicators. At the latest Third Plenum, the Chinese Communist Party formally invited the private sector to help deliver healthcare by allowing increased foreign ownership in Chinese hospitals. No wonder many private care operators see increased opportunities in the area. The private equity group TPG recently tied up with Fosun Group to take over a prominent Chinese hospital operator, Chindex International (listed on the American stock exchange). Fosun Group has ambitious plans to invest in hundreds more hospitals in China, much like the multimillion-dollar investments planned by India's Apollo Group.
Within Europe, too, more than 35% of all hospital beds in France are now in private care, with prominent groups like Générale de Santé, Médi-Partenaires and Ramsay Healthcare. In Germany, the four big private hospital groups hold more than 10% of the total beds. In the U.K., large private care operators like BMI, Spire, Hospital Corporation of America, Nuffield Health and Ramsay are already active, and their market share is expected to grow. For example, Circle Health took over the running of Hinchingbrooke Hospital, Serco already runs community care in Suffolk, and Virgin Health is involved in NHS Surrey. Some NHS trusts running in the red (with financial pressure expected to grow over time) are looking for private-sector expertise or a "business partner" from within the NHS itself. In the U.K., GP practices also face Ofsted-style ratings, and under-performing ones will be under pressure to either close or improve performance radically: something where private-sector expertise will help. (See: Market Insight: Changes in England's National Health Service Create New Market Opportunities)
In a nutshell, private care is growing all over the world.
The private sector is leveraging technology internationally to achieve success. For example, a Spire Healthcare hospital is trialling wearable technology from SensiumVitals for monitoring vital signs, and Circle Health in the U.K. uses smartphones for doctor consultations.
By leveraging information technology and operational technology, private providers can build differentiation, an area that will present large opportunities for the tech industry.
What do you think?
VMworld 2014 took place in San Francisco, CA, this week, with ~22,000 attendees descending for the annual event that showcases VMware's newest announcements and product/service advancements. I attended once again to pay particular attention to VMware's cloud movements. Here are a few of my impressions.
Prior to VMworld, VMware announced a rebranding of vCloud Hybrid Service (VMware's public cloud offering) to vCloud Air. A name is just a name, so I don't really mind this change; in fact, I think it is a simpler brand name. And simple is good.
Most important for vCloud Air is its future direction. vCloud Air is now one year old, and Gartner clients routinely tell me that in its current state its feature set does not stack up well against other public cloud providers, namely Amazon Web Services and Microsoft Azure. However, there continues to be incredible interest in the future of vCloud Air from the loyal VMware customer base. But where is it going, and how fast?
VMware announced several major service expansions including an on-demand pricing model, an object storage service, a database as a service (DBaaS) offering, and a relationship with AT&T for NetBond connectivity into vCloud Air. There were many more announcements surrounding vCloud Air.
VMware leadership informed me that they don't intend to get into a feature-by-feature war with other major cloud providers and will instead focus on use-case differentiation. DRaaS and DaaS were mentioned multiple times in multiple venues as examples of use-case differentiation.
I believe vCloud Air will actually have to do both: compete on features and differentiate on use cases. Right now features are king in the IaaS and PaaS markets, and customers will have decreasing tolerance for a non-competitive feature set. However, I think the innovation engine within vCloud Air is starting to move, and the next 12 months will be fascinating. I also suspect VMware might have some tricks up its sleeve and look to differentiate on non-technical features that appeal to large enterprises with significant VMware investments.
Each of the services mentioned above is now a baseline, mandatory feature that all major clouds must have, so in many ways vCloud Air is still far behind. Furthermore, these announced services are just now in beta and will not move into GA for another quarter or two. Unfortunately, there were no announcements around pricing of these services, and VMware will need to be careful here not to price itself out of fierce competition.
Enterprises are quickly moving from tactical to strategic selections of IaaS and PaaS providers, and these decisions are often made on current features and future roadmap. It's not too late for vCloud Air, but the clock ticks fast in this market. VMware will need to aggressively move these services from beta to GA and expand into other important features such as auto scaling, advanced auditing/logging, and identity and access management.
vRealize Air Automation
Another common complaint I hear from customers that evaluate vCloud Air is that significant sets of features are available only if you are running the on-premises vCloud Suite, with vCloud Automation Center (vCAC) and vCenter Operations Manager (vCOps) being the two significant packages. Unfortunately, a lot of the VMware customers that would benefit from vCloud Air are not paying the hefty license fees to operate and run vCAC and vCOps. Therefore, when these customers evaluate vCloud Air, the automation, management and monitoring functions of the native vCloud Air interface are less than impressive.
VMware announced that its management suite is now rebranded "vRealize" and that a new SaaS-based version of the suite will be rolled out, named vRealize Air Automation. This is a big deal for future vCloud Air adoption because customers will no longer have to run vCAC or other components internally, a requirement that limited much of vCloud Air's use to very large VMware customers. Those same very large customers also tend to have robust environments and large datacenters, and thereby potentially do not yet need vCloud Air. A SaaS-based solution for the vRealize Suite will open the door to many new vCloud Air customers, namely those without the on-premises tools. It will also allow VMware to iterate on its feature-set roadmap faster than it can in a shipping product with major and minor version releases.
I expect the vRealize Air Automation solution to start to look and function more like the management consoles of AWS, Azure, Google Cloud Platform or IBM SoftLayer. It will also likely go much further and start to compete with popular SaaS-based cloud management brokers like CSC ServiceMesh, Dell Cloud Manager and RightScale. According to customers, VMware is a very good management company, and management is one of the stickiest reasons to continue to leverage VMware technologies.
We do not yet know about the pricing of vRealize Air Automation, nor what the current feature set and roadmap look like. This is slightly disappointing, but not all that unexpected for major conference announcements, where the announcement generates buzz and the details filter out afterward. I will be paying very close attention to this because I believe management is the emerging ugly issue in public cloud services.
VMware Integrated OpenStack (VIO)
One of the announcements lightest on details is also the one I am most intrigued by. VMware announced VIO, essentially a VMware-based distribution of OpenStack. Right now, VIO basically just exposes OpenStack APIs on top of VMware infrastructure, but it holds longer-term promise. OpenStack is still plagued by problems with installation, vendor support and management. But when you consider the large VMware install base within organizations, with a lot of untapped capacity, organizations may want a shortcut to convert some of that into an OpenStack cloud. This is where VMware could do quite well. VMware might be able to deliver this simplicity, but in its current iteration, I think VIO is still far from that. More important, I believe, is the potential for VMware to bring great management to an OpenStack solution.
Some industry experts want you to believe that you don't need to manage a cloud. That is far from the truth: there is a lot of management necessary; you just manage different "things." For example, consider something like auto scaling. You may need to manage VMs less than in a traditional architecture, but you'll have to manage the auto scaling group, the policies assigned to it, and the configuration and change of those policies. If VMware focuses hard on all the difficult management aspects of OpenStack, VIO has legs.
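To make the "you manage different things" point concrete, here is a sketch of what owning an auto scaling policy looks like. The policy shape and the target-tracking rule are hypothetical, not any provider's actual API; the operator's job is the thresholds, not the individual VMs:

```python
# A hypothetical auto scaling policy: the operator manages these numbers,
# not the instances themselves.
policy = {"min": 2, "max": 10, "target_cpu": 60.0}

def desired_capacity(current, avg_cpu, p=policy):
    """Simple target tracking: scale so average CPU approaches the target."""
    want = round(current * avg_cpu / p["target_cpu"]) or 1
    return max(p["min"], min(p["max"], want))  # clamp to the policy's bounds

print(desired_capacity(4, 90.0))  # CPU hot: scale out
print(desired_capacity(4, 15.0))  # CPU idle: scale in, floored at the policy minimum
```

Changing behavior means changing the policy (a config-and-change-management problem), which is exactly the kind of management work that does not disappear in a cloud.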
Docker and Kubernetes Collaboration
Although not a new technology, containers are all the rage in 2014 and will continue to be in 2015. The industry hype has been that containers will replace VMs and VMware will be severely impacted. Well, VMware counteracted this hype strongly at VMworld with an announcement of Docker and Kubernetes collaboration and contribution. I've always thought that there is room for containers and VMs to live together for the next several years. I see value in two layers of encapsulation, one at the OS (VM) and one at the app (container), and we cannot ignore the enterprise readiness of VM security and VM management tools. Container management and security still need improvement, so why not combine the two worlds?
This announcement is a very proactive move by VMware. The leadership clearly sees the value in containers and might even admit that, far into the future, VMs could be at risk. If that happens, it now looks like VMware is set up to adjust accordingly. If container management and Kubernetes functionality are integrated into existing VMware management tools, consider the future vision of managing both VMs and containers from a single pane of glass, whether in a hybrid (VM and container) world or in a transition (VMs to containers) world. This is a huge move, and perhaps the best of the lot at VMworld.
There were several other fascinating announcements a bit more outside of my core coverage space so I encourage you to digest the press releases. Gartner clients should then contact the appropriate analyst at Gartner for a more in depth inquiry about what each announcement means for your organization.
What did you think about VMworld 2014?
The term "thought leader" has always rubbed me the wrong way. It sounds self-righteous and pompous. Not unlike "hip" or "cool," it's something you really can't bestow upon yourself without wholly cancelling its effect.
Other superlatives subject to this rule? Guru. Visionary. Don't even get me started on growth hacker.
But social media is both ecosystem and egosystem. Here, self-professed thought leaders and self-appointed exemplars have something of a cosmic quality: infinite in number, and sometimes perhaps just a bit starry-eyed about what they have to say.
Don't get me wrong: the medium has bred plenty of legitimate geniuses with something of great value to share. Lots of them. Seth Godin, Vala Afshar, Ted Rubin and Ben Horowitz all come to mind. These are examples of thought leaders who get it right, in my opinion.
But the question that occurs to me is: When does thought leadership become less about the audience and more about ego? When does it become less about the thought than about the leader?
This question occurred to me over the weekend while reading "Of Myself I Sing," Teddy Wayne's excellent Sunday New York Times opinion piece on the slippery slope of self-promotion. Wayne suggests that "much self-promotion on social media seems less about utility and effective advertising and more about ego sustenance."
The fact that social media has certain narcissistic qualities isn't news to any of us. If you're a Facebook user, you see daily highlight reels in your newsfeed to this effect. But, as content marketers, how do we ensure our audience, not our egos, remains true north?
Here are some tips:
Believe me: I recognize the potential irony of writing these words from what you may see as my own starry-eyed perch. I write this blog out of habit, for practice, to test ideas and, frankly, for fun. I write it for you, but also for me.
I can only hope that it's more about you than about me, but none of us is fully immune. We can only live by a set of principles and consciously try to do better.
This is a guest blog from Joe Skorupa.
The data center market has enjoyed years of relative stability and gradual technological evolution. That is about to change. In response to the early warning signs in this market, Gartner commissioned a new body of research to help our Technology and Service provider clients deal with the impending disruptions. We published the first research note today:
Four Highly Disruptive Factors Will Challenge the Survival of Incumbent Data Center Market
Joe Skorupa | Adrian O’Connell | Errol Rasit | Jeffrey Cole | Michael Warrilow | Roger W. Cox
Summary: There are four market disruptions in play in the DC infrastructure market. Elements of them are already in play, and will become visible no later than early 2016; however, radical action by just one significant player could accelerate the market disruption of any of the factors. …
A few key takeaways include:
Gartner predicts that by year-end 2016, the DC market will undergo drastic change, driven by four disruptive factors: highly disruptive competition, big cloud provider dominance, economic warfare and nationalism. Elements of these disruptive factors are already in play, and will become visible no later than early 2016; however, radical action by just one significant player could accelerate the market disruption of any of the factors.
By year-end 2017, DC infrastructure vendor gross margins will contract by up to 5 percentage points below current levels. This research was created to help vendor CxOs understand and prepare for a new DC market. However, it can also help CIOs and VPs of I&O understand vendor market positions and develop a process to assess risk.
This post was written while attending VMworld, where VMware CEO Pat Gelsinger stressed disruption in his keynote and where VMware announced its first hyperconverged infrastructure offering. Clearly the disruptions are underway.
Welcome to ENTER al día; these are the day's most important stories in technology and digital culture: the leaked celebrity photos, Android One and high-end phones.
When we shared tips on where to place your speakers for the best sound, we noticed a big problem: small rooms don't allow a good surround system. To solve this, companies created sound bars as a secondary solution for those who want quality for their music and movies, […]
After the scandal that erupted this weekend over the leak of compromising photographs of celebrities who were victims of a malicious attack that gained access to their iCloud accounts, The Verge shared a study indicating that the sending of nude selfies has grown […]
IFA has arrived, and with it come new announcements that are already starting to leak a few days before the show. Today it is Sony's turn, with what appears to be the third installment of its SmartWatch, accompanied by the SmartBand Talk, the second version of its smart band focused […]
Apple's application store is one of the most dynamic in the world. In its most recent presentation, the company stated that there are 1.2 million applications in the App Store, a figure that has surely grown since June. However, Apple has a complex curation process, and many developers are left […]
Misunderstandings often arise between women and men because neither side understands the other's feelings. However, Google Glass could help us settle these historic differences.
A few weeks ago BuzzFeed, the famous site of viral content and popular lists, received a $50 million investment from the investment firm Andreessen Horowitz. This valued the company at around $850 million and set it searching for new avenues of expansion, such as the development […]
For those awaiting a seventh installment of the core "Resident Evil" saga, the wait may stretch a bit longer: Capcom has revealed that its next step with the franchise will be a sequel to the successful "Resident Evil: Revelations." During a small press presentation, ahead of what we will learn […]
Project Morpheus, Sony's virtual reality bet, is a device that brings us closer to the incredible world of virtual MMOs (hopefully like SAO, but without dying). In Japan, the project is already looking to create interactive content such as anime, games with Hatsune Miku and virtual tours. As Kotaku reports, during the […]
Many users are awaiting the arrival of Android L in the coming months, with many new features such as a redesigned interface and improvements in operating system performance. However, the real revolution may happen elsewhere: the entry-level segment. According to information from NDTV, Google sent invitations […]