How to grant LibreOffice file open permission on Chrome OS | | Link: https://www.techrepublic.com/article/how-to-grant-libreoffice-file-open-permission-on-chrome-os/#ftag=RSS56d97e7 | | Published Date: 2020-08-12 | LibreOffice is a great addition to your Chromebook, but out of the box, you won't be able to open any files. Jack Wallen shows you how to fix this problem.
Image: Jack Wallen
If you're like me, the Chromebook is an attractive solution that can easily meet many mobile needs. They are cheap alternatives to far more expensive hardware and they perform like champs--rarely, if ever, suffering under the weight of issues that can plague other platforms.
That doesn't mean the Chromebook is perfect. There is at least one glaring issue some users might run up against: the lack of a standard office suite. Don't get me wrong, Google Docs and Drive make a powerful team (one that I use every day), but there are instances where you might prefer a standard office suite of tools. The most obvious reason is an easier route to offline work. Although it is very possible (and mostly simple) to use your Chromebook to work with documents offline, it must be set up to do so.
If you don't want to worry about such things, and you'd like your Chromebook to behave more like a standard laptop, you might have already enabled Linux support for Chrome OS.
SEE: VPN usage policy (TechRepublic Premium)
With that step taken care of, you might have also installed LibreOffice. Then, you probably ran into a serious problem: LibreOffice can't open any files.
What gives? Why enable the installation of a piece of software, knowing that piece of software doesn't work out of the box? The explanation is actually pretty simple. LibreOffice uses file locking to prevent others from opening the same file as you. This is not only unnecessary on your Chromebook, it's also what prevents the tool from opening files in the first place. So, we have to disable it.
Why ship the feature enabled, knowing it'll prevent LibreOffice from working? Since there isn't a version of LibreOffice specific to Chrome OS, you're actually installing the Linux version of the tool, and in that version file locking is enabled by default. Fortunately, all you have to do is disable file locking and you're good to go.
Although that does make for a serious hurdle to using LibreOffice on Chrome OS, the good news is you can fix that problem.
I'm going to show you how.
What you'll need
A Chromebook that supports Linux app installation
LibreOffice installed on your Chromebook
How to disable file locking: Part one
To disable file locking, you first must edit a config file. Open the Linux terminal window on your Chromebook and install the nano editor with the command:
sudo apt-get install nano -y
Once that's installed, open the necessary file for editing with the command:
sudo nano /usr/lib/libreoffice/program/soffice
Look for the following lines:
# file locking now enabled by default
SAL_ENABLE_FILE_LOCKING=1
export SAL_ENABLE_FILE_LOCKING
Comment out both of the lines like so:
# file locking now enabled by default
# SAL_ENABLE_FILE_LOCKING=1
# export SAL_ENABLE_FILE_LOCKING
Save and close the file.
That's it for the command line.
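If you'd rather not edit the launcher by hand, the same change can be scripted. The sketch below is an illustration, not part of the official instructions; it simply comments out any uncommented line that mentions SAL_ENABLE_FILE_LOCKING:

```python
def comment_out_locking(text):
    """Return the soffice launcher text with any line that sets or exports
    SAL_ENABLE_FILE_LOCKING commented out; other lines are untouched."""
    out = []
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped.startswith("#") and "SAL_ENABLE_FILE_LOCKING" in stripped:
            out.append("# " + line)
        else:
            out.append(line)
    return "\n".join(out)

snippet = "SAL_ENABLE_FILE_LOCKING=1\nexport SAL_ENABLE_FILE_LOCKING"
print(comment_out_locking(snippet))
# # SAL_ENABLE_FILE_LOCKING=1
# # export SAL_ENABLE_FILE_LOCKING
```

To apply it for real, you'd read /usr/lib/libreoffice/program/soffice (with root privileges), pass the contents through this function, and write the result back.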
How to disable file locking: Part two
Now we must configure LibreOffice not to use locking. To do that, open LibreOffice and click Tools | Options. In the resulting window, click Advanced. From within the Advanced section, click Open Expert Configuration (Figure A).
Figure A
In the Expert Configuration window, search for "useLocking." Double-click the True entry, so it reads False (Figure B).
Figure B
Next, search for "UseDocumentOOoLockfile" and "usedocumentsystemfilelocking" and make the same change to each (from True to False).
Once you've made those changes, click OK in the Expert Configuration window, close out the Options window, and restart LibreOffice.
At this point, you should finally be able to work with documents on your Chromebook with LibreOffice. Your Chromebook is now even more productive.
Also see | AI company pivots to helping people who lost their job find a new source of health insurance | | Link: https://www.techrepublic.com/article/ai-company-pivots-to-helping-people-who-lost-their-job-find-a-new-source-of-health-insurance/#ftag=RSS56d97e7 | | Published Date: 2020-08-12 | Jvion helps people find coverage, which keeps hospitals from footing the bill for uncompensated healthcare.
Image: Anadmist, Getty Images/iStockphoto
An artificial intelligence company has pivoted from finding the right treatment plan for patients to helping them find health insurance to pay for that care. At the start of the pandemic, Jvion used its Eigen Sphere engine to build a map that shows which residents in a community are at highest risk for contracting the coronavirus. Now the company is helping hospitals protect the bottom line by identifying which patients are most likely to have lost insurance and helping them find a new policy.
Many people who have lost their jobs due to the coronavirus pandemic also have lost their health insurance. The Commonwealth Fund found that 41% of people who lost their jobs had insurance through that job. The Kaiser Family Foundation estimates that four out of five Americans who lost health insurance when they lost their jobs can get coverage through the Affordable Care Act or Medicaid, but many do not know about it.
Dr. John Showalter, chief patient officer at Jvion, said that the company's CORE analytics engine has always been able to understand how people use or don't use the healthcare system. The company originally focused on matching the right care with the right patient based on age, illness, and other factors. When the COVID pandemic started and people delayed healthcare visits and lost insurance, the company shifted its focus to understanding who is most in need of insurance coverage.
The Jvion CORE generates two types of recommendations: How to get insurance coverage and how to get preventative care and avoid hospitalizations. Coverage recommendations focus on identifying uninsured patients and helping them enroll in the most effective health insurance plan for them.
"Based on estimates in our clients' data, over 25% of patients with uncompensated care qualify for Medicaid and an additional 20% or more qualify for ACA marketplace plans," he said.
SEE: Robotic process automation: A cheat sheet (free PDF) (TechRepublic)
Showalter said that Jvion data sets have always included the patient's insurance information, but insurance coverage has not been an important data point previously.
While this helps individuals manage their diabetes and other ongoing health problems, it also helps hospitals improve the bottom line because when more patients have insurance, hospitals provide less care for free.
The Affordable Care Act significantly reduced these costs for hospitals. According to the Center on Budget and Policy Priorities, the nationwide uninsured rate fell 35% from 2013 to 2015, and nationwide hospital uncompensated care costs fell by about 30% as a share of hospital budgets, a $12 billion drop in 2015 dollars. In states that expanded Medicaid, the number of uninsured people dropped the most and uncompensated care dropped by 57% on average.
Showalter said that the company's research suggests up to two-thirds of hospital visits can be prevented by incorporating Jvion's insights into preventive care management programs for asthma, cancer, diabetes, and heart disease. This kind of care involves preventive checkups, patient education, and addressing the social determinants of health.
The Centers for Medicare and Medicaid Services has recently started reimbursing doctors and nurses for providing these check-up services. The idea is that receiving advice from a healthcare professional on a regular basis can keep a person out of the hospital or nursing home, which is better for the individual and saves money on healthcare costs at the same time.
The Jvion CORE does not use reimbursement rates to identify patients most in need of care but the algorithm does consider coverage and patient costs when identifying patients who may be underinsured.
"If a patient is highly likely to need a lot of care, a high deductible plan is probably not a good option for them financially," Showalter said.
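Showalter's point can be made concrete with a little arithmetic. The premiums, deductibles, and utilization figures below are hypothetical, chosen only to illustrate why a high-deductible plan can cost a high-utilization patient more overall:

```python
def expected_annual_cost(monthly_premium, deductible, coinsurance, oop_max, expected_claims):
    """Rough yearly cost to the patient: 12 months of premiums plus cost
    sharing (deductible, then coinsurance), capped at the out-of-pocket max."""
    if expected_claims <= deductible:
        sharing = expected_claims
    else:
        sharing = deductible + coinsurance * (expected_claims - deductible)
    return 12 * monthly_premium + min(sharing, oop_max)

# Hypothetical patient expecting $20,000 of care in a year:
high_deductible = expected_annual_cost(200, 6000, 0.2, 8000, 20000)  # 10400
low_deductible = expected_annual_cost(450, 1000, 0.2, 4000, 20000)   # 9400
```

With these made-up numbers, the cheaper-premium, high-deductible plan leaves the heavy utilizer about $1,000 worse off over the year.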
Can AI replace insurance navigators?
In addition to making health insurance somewhat easier to get, the Affordable Care Act funded navigators who helped individuals choose the right insurance plan. The Trump administration cut funding for the navigators from $63 million in 2016 to $10 million in 2018. During the 2019 open enrollment period for the federal ACA health insurance marketplace, overall enrollment dropped by 306,000 people.
"While that may not seem like a lot, the average annual medical expense is around $3,000 per person, so a shortfall of covered patients could represent over $900,000,000 of medical expenses that will not be paid by health insurance," Showalter said.
When states banned elective medical procedures temporarily during the early months of the pandemic, this cut off an important revenue stream for hospitals and many laid off workers. Some of these layoffs included patient navigators who helped patients enroll in health insurance, particularly Medicaid.
Showalter said that all Jvion customers have had at least a few navigators on staff but not enough to reach every patient in need of assistance.
"To bridge the gap, we are deploying advanced technologies, like chatbots, to support our clients and we will also be deploying a full-service option in the fall," he said. "Our clients will simply provide us with their data and we will do the rest: identify, engage, and enroll."
Mapping the risk of contracting COVID-19
Jvion also built a map to help businesses and public health officials understand the varying COVID-19 risks based on location. This map estimates the risk of contracting the virus based on vulnerability calculations that include access to jobs, access to healthcare, race, length of commute, and environmental risks.
The analysis identifies specific census tracts where the population is at greater risk of hospitalization and mortality due to COVID-19. After studying research from the US Centers for Disease Control and Prevention on environmental factors that led to certain outcomes with respiratory infections, Jvion took anonymized data on 30 million Americans and combined it with models of patients with comparable respiratory infections, as well as with virus and geolocation data.
The service provides individual employees information about their risks so they can request appropriate protections at work. It also alerts employers to areas where a high number of employees are vulnerable so they can implement appropriate safeguards.
Also see | Five ways tech is going to change everything about going to school | | Link: https://www.techrepublic.com/article/five-ways-tech-is-going-to-change-everything-about-going-to-school/#ftag=RSS56d97e7 | | Published Date: 2020-08-12 | Now more than ever, the pressure is on for the education sector to innovate. TechRepublic looks at five key ed-tech trends expected to emerge in the post-pandemic landscape.
The COVID-19 crisis has forced the education sector to digitise as much and as quickly as possible to keep contact between students and teachers continuing through lockdown.
The pandemic has also highlighted long-running issues around traditional teaching models. It has shown that there is no quick fix, and that successful digital transformation in education requires a combination of cultural and technical answers, with teachers and the student experience at the forefront.
With industries around the world now starting to consider what the new normal might have in store, TechRepublic spoke to ed-tech sector experts to find out the key trends that will define the future of education.
1. Blended learning
Social distancing is likely to be in place for the foreseeable future, meaning schools and academic institutions will have to reassess face-to-face teaching and look towards learning models that blend in-person and online learning.
Blended learning can range from online tests, discussions, and interactive learning materials to video content, with some, but not all, elements still completed face to face.
A number of UK universities have already moved lectures online in response to COVID-19, with Cambridge University having announced that it will continue doing so for the remainder of 2020.
Imperial College Business School has successfully shifted all learning online, with the university last year proclaiming to have become the first in the world to deliver live lectures using hologram technology.
"We are likely to see an increase in demand for online and blended programmes, particularly at the postgraduate level," says David Lefevre, director of Imperial College Business School's EdTech Lab.
"This increase in demand will be met by an increase in the ability for universities to deliver online learning. The sector is witnessing an unprecedented volume of innovation with regard to digital learning and many institutions have moved far beyond the initial move to remote teaching via webinars," he says.
However, the shift to this new model will need to be carefully thought-out, and will not be as straightforward as transplanting traditional courses onto a Zoom or Teams call.
"It is crucial remote-learning initiatives are carefully planned and thought-out with students in mind, not only to guarantee their safety and continue with their higher education, but also to ensure the best, most engaging and personal educational experience possible," says Stewart Watts, VP EMEA at education software provider D2L.
2. Chatbots and AI assistance
Chatbots and automation technology have matured quickly during the COVID-19 pandemic, proving crucial in allowing organisations and even entire industries to automate back-office processes and interaction with customers while limiting face-to-face contact.
Some forms of automation have been in use in education for years (using computers to score multiple-choice exam questions, for example). However, COVID-19 has highlighted an emerging trend of using AI-type technologies to assess more qualitative, written and even spoken exams.
Mike Fenna, chief technology officer at Avado teaching academy, says chatbots could eventually be used as "virtual tutors" to monitor students' progress and provide real-time feedback.
"Chatbots have been around for a while, but to provide effective support for learners, they need to move beyond being a way of searching frequently asked questions, and instead integrate with existing learning systems so that they can provide personalised information to the learner, and also to make changes for the learner in those systems."
SEE: Robotic process automation: A cheat sheet (TechRepublic)
Douglas Winneg, executive vice president of education-assessment service PSI Education, says there has been a "surging demand" for secure, remote assessment technology in the wake of COVID-19.
Emerging technologies such as biometric identity management, data forensics, and AI paired with human proctors will accelerate the delivery of secure remote assessments, Winneg tells TechRepublic.
"A variety of different tools already exist to securely administer assessments online, including live remote proctoring, record and review remote proctoring, secure lockdown browsers and rigorous ID checks," he says.
"When so little is certain about the timescale and ongoing impact of the pandemic, organizations urgently need to review their ability to not only teach and assess students remotely, but do it with the right technology, ensuring that security, integrity, and student privacy are at the forefront."
3. Personalised learning
Another potential application of AI in education is using predictive analytics to figure out how each pupil learns best, whether that is through video, interacting with apps or working with other students, and then adapting how the course is delivered to suit it.
This sort of personalised learning is considered vital to ensuring teachers can provide education tailored to individual children's needs, while also allowing kids to take greater control of their own education and seek out their own niches in the technology-driven workplace of tomorrow.
Without the format of a physical classroom or direct supervision of a teacher during COVID-19 lockdowns, students have already been taking on more responsibility for their own education, says Simone Martorina, business manager for visual instruments at Epson UK.
As a result, this new generation of 'meta learners' will be more empowered to explore topics that interest them and approach tasks independently "in unique and creative ways," Martorina tells TechRepublic.
This is likely to encourage students to self-educate more confidently, while also shaping the way schools operate moving forward, she says. "Once it's safe to do so, future learning in the classroom will become increasingly tailored and personalised to the individual, with teachers acting increasingly as guides.
"This shift will be supported by the integration of new technologies such as artificial intelligence, augmented reality, 3D printers and robots into future schools."
Andy Moss, managing director of corporate learning at City & Guilds Group, points out that moving learning and development fully online is more than simply adopting digital technologies, something that academic institutions will want to consider in their wider strategies around how learning is managed, organised, structured, designed and delivered to suit pupils' learning.
"That means an even greater focus on the learner experience we deliver, creating engaging, effective and impactful programmes," Moss adds.
"Programmes that allow learners to take much greater control of their learning journey, to self-pace and to self-direct, which actively foster the desire to learn and encourage experimentation, both individually and through peer-learning."
4. Confidence in cloud (and beyond)
Amanda Jackson and Dave Smith, senior inspectors at the UK's HES School Improvement Services, say there has been "a shift in mindset" of teachers and parents who may have been more resistant towards the use of technology before COVID-19, particularly around the benefits of the cloud.
"In particular, we have seen increased engagement with cloud services including Google Classroom and Microsoft 365 and cross-curricular online platforms such as GCSEPod and Purple Mash," they tell TechRepublic. "All of this will help improve access to technology."
Teachers and pupils have also become familiar with cloud-based tools over the past few months, particularly video-conferencing services. While it's unlikely that lessons held over Zoom or Microsoft Teams will become the norm, we will see an increase in the use of this sort of technology in classrooms going forward, says Chris Ashworth, head of public benefit at Nominet.
"Teachers will have learnt and improved technical capabilities and will be more confident in incorporating tech into their lesson plans and activities, finding new ways of teaching subjects online which can be both fun and interactive," Ashworth tells TechRepublic.
"This will be supported by the many resources that are now available to them online, allowing their students to do everything from explore the inside of a cell to re-live the battle of Hastings via their devices."
SEE: Top cloud providers 2020: AWS, Microsoft Azure, Google Cloud, hybrid, SaaS players (free PDF) (TechRepublic)
However, Jackson and Smith also point out that schools will need to invest in the necessary infrastructure to enable these new ways of working, including the basics of strong Wi-Fi and broadband connectivity.
"While we are likely to see a continued rise in cloud services, an important priority for schools to consider, and invest in, is whether they have sufficient broadband and infrastructure to cater for a more digitised way of teaching and learning," they say.
"Having this as a key part of their strategy moving forward will help to ensure the experiences in the classroom and the content they're delivering to those in multiple locations are more joined-up."
Ashworth shares similar views: "Nearly two million households across the UK still don't have access to the internet, with recent figures claiming that one million children are struggling to access educational support during the lockdown due to poor internet connections," he says.
"Digital exclusion and the inequality of access have never been so obvious as they are during this pandemic."
5. Tech literacy and digital skills training
Of course, everything outlined above counts for little if teachers don't have the necessary skills in the technologies they use to help educate the class of tomorrow. While hybrid models of video and online learning have mostly worked during the pandemic, more investment in digital skills training is needed if remote learning is to become anywhere close to the norm.
Rachel Gowers, director at Staffordshire University London, tells TechRepublic she anticipates an increased uptake in both practical and technical courses focused on teaching digital skills following the pandemic.
"You can't teach coding from a textbook; it needs to be done online, and companies that provide new and engaging ways to do this will become popular and sought after," Gowers says.
"One of the largest trends we are expecting to see over the coming months is a surge of people who want to come back and retrain or upskill after being made redundant, or those recognising that their current jobs aren't very future-proof."
Atif Mahmood, CEO and founder of remote-teaching platform Teacherly, says there will be more pressure on schools to innovate now that staff and pupils have seen the value of ed-tech, as well as having the opportunity to hone their digital skills over the course of the pandemic.
"We've also seen more collaboration between teachers and schools who are more keenly sharing knowledge in order to speed up digital professional development and the creation of better online lessons for increased engagement," Mahmood tells TechRepublic.
"This has only been possible due to digital platforms that provide lesson-planning templates and online resources that can be easily shared between networks of teachers. As the digital transformation of education continues, it's likely that we'll see this become more widely adopted."
Also see | Top 5 steps for good data science | | Link: https://www.techrepublic.com/article/top-5-steps-for-good-data-science/#ftag=RSS56d97e7 | | Published Date: 2020-08-12 | Data science entails more than just collecting and analyzing data. Tom Merritt lists five basic data science steps.
A lot of people talk about data science. Few of them know what they're talking about and even fewer are aware of how it works. But, it's used everywhere these days, so even if you aren't a data scientist, it's good to know what the basic steps are. Here are five basic steps for data science.
SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)
Know why you're doing it. Are you solving a problem? What problem is it? Data science is not a sauce you spread on things to make them better somehow. It's a way of addressing issues. Know what problem your business is trying to solve before you ask data science to solve it.
Collect the data. Once you know the business reason, your data scientist can start figuring out what data pertains to it and collect it. Don't just pick the most readily available data or you risk introducing bias.
Analyze the data. Exploratory data analysis (EDA) is the most common approach. It reveals what the data can tell you. EDA is often good at revealing areas where you want to collect more data. Good EDA uses a predefined set of guidelines and thresholds to help overcome bias.
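A minimal illustration of that last point: an exploratory summary that applies predefined thresholds rather than eyeballing the data. The cutoffs and sample values below are arbitrary examples, not recommendations:

```python
import statistics

def eda_summary(values, missing_threshold=0.2, outlier_z=3.0):
    """Summarize one numeric column and apply predefined thresholds,
    so the 'collect more data?' call is a rule, not a hunch."""
    present = [v for v in values if v is not None]
    mean = statistics.mean(present)
    stdev = statistics.stdev(present)
    missing_rate = 1 - len(present) / len(values)
    outliers = [v for v in present if stdev and abs(v - mean) / stdev > outlier_z]
    return {
        "mean": mean,
        "stdev": stdev,
        "missing_rate": missing_rate,
        "needs_more_data": missing_rate > missing_threshold,
        "outliers": outliers,
    }

# Three of seven values missing trips the predefined threshold.
summary = eda_summary([12.0, 14.0, None, 13.0, 15.0, None, None])
```

The point of the flags is repeatability: two analysts running the same summary reach the same "collect more data" decision.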
Build your models and test whether they're valid. Once you have analyzed the data, you can build a machine learning model that aims to provide a good solution to the business problem. Before settling on a model, be sure to experiment with a few suitable options and validation cycles.
Results. Run the model and interpret the results. A lot of folks don't realize that artificial intelligence doesn't just tell you the solution to your problem. Machine learning models deliver output that humans interpret. The data scientist's insights are what make the output something you can take action on.
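The validation cycles mentioned in step 4 can be sketched with a minimal k-fold cross-validation loop in plain Python; the mean-predictor baseline here is just an illustrative stand-in for a real model:

```python
import random
import statistics

def kfold_mse(xs, ys, fit, predict, k=5, seed=0):
    """k-fold cross-validation: hold out each fold in turn, fit on the
    remaining points, and average the squared error on the held-out fold."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        errs = [(predict(model, xs[i]) - ys[i]) ** 2 for i in fold]
        scores.append(statistics.mean(errs))
    return statistics.mean(scores)

# Baseline "model": always predict the training mean.
mean_fit = lambda xs, ys: statistics.mean(ys)
mean_predict = lambda model, x: model
```

Comparing a candidate model's cross-validated score against a baseline like this is one simple way to decide whether the candidate is worth keeping.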
Sure this makes it sound "that easy," and obviously any data scientist knows the proof is in all that work to make these things happen, but knowing the basics can help you make better decisions that will help your data scientists do their job better. Everybody wins. Even the machine.
Also see | Jobs slowed in July, yet net IT employment is up | | Link: https://www.techrepublic.com/article/jobs-slowed-in-july-yet-net-it-employment-is-up/#ftag=RSS56d97e7 | | Published Date: 2020-08-12 | Even though US tech sector postings scaled back in July, information technology roles have grown by more than 203,000 positions since the COVID-19 outbreak, according to a new report.
Image: iStock
US employment declines were recorded in July, but despite the losses, net IT employment remains up by more than 203,000 positions since the outbreak of the coronavirus, according to analysis from the nonprofit CompTIA, an association for the global technology industry.
The unemployment rate for IT jobs is 4.4%, less than half of the national unemployment rate of 10.2%, even though 17,000 tech positions were cut, according to CompTIA, using data gathered from the US Bureau of Labor Statistics' latest Employment Situation report.
For the last five years, each year has averaged six months of monthly job gains and six months of monthly job losses, according to the organization. Year to date in 2020, there have been five months of job gains and two months of job losses.
SEE: Return to work: What the new normal will look like post-pandemic (free PDF) (TechRepublic)
"After several months of tech job gains exceeding expectations in a very difficult economic environment, a pause in tech hiring was not unexpected," said Tim Herbert, executive vice president for research and market intelligence at CompTIA, in a press release.
Image: CompTIA
US technology job postings totaled more than 235,000 in July (up from 220,000 in May). While that number seems substantial, it represents a decline from the pandemic peak in March (admittedly at the very start), when postings exceeded 350,000. Charts featured here are listed starting with the highest in the category.
Top of the charts
The top five job openings in the technology sector in July:
Software and application developers (70,600 job postings)
IT support specialists (21,400)
Systems engineers and architects (19,100)
Systems analysts (15,600)
IT project managers (13,300)
July's top employers for IT job postings:
Amazon
Humana
Anthem Blue Cross
Stanley Black & Decker
Applied Materials
IBM
Raytheon
Leidos
Guidehouse
General Dynamics
Top IT positions in July for remote job postings:
Software developers, applications
IT support specialists
Systems engineers and architects
Web developers
Systems analysts
Cybersecurity analysts
Database administrators
IT project managers
Network and systems administrators
Business intelligence analysts
Image: CompTIA
States with the highest number of IT job postings for July:
California
Texas
Virginia
New York
North Carolina
In the chart for top states for remote IT jobs the list is the same as above, with the exception of the No. 5 position: Virginia instead of North Carolina.
And, while down from June, these metropolitan areas led the July list for the most IT job postings:
New York
Washington
Dallas
San Francisco
Los Angeles
Among specific industries, these had the highest numbers of US job openings for IT positions:
Professional, scientific and technical services (39,956)
Finance and insurance (18,756)
Manufacturing (17,473)
Information (11,095)
Retail trade (7,042)
Image: CompTIA
Despite modest gains, these states had the highest month-over-month increase in job postings:
Louisiana
Mississippi
West Virginia
Maine
North Dakota
The analysis included charts for the top states and top metro areas for both IT and remote IT job postings, comparing June and July, as well as the top industries for IT job postings. Lastly, the report includes a historical unemployment rate trend for the decade for IT occupations. See chart below.
Image: CompTIA
Also see | With 5G, edge computing and IoT will surge: Now's the time to upgrade your edge | | Link: https://www.techrepublic.com/article/with-5g-edge-computing-and-iot-will-surge-nows-the-time-to-upgrade-your-edge/#ftag=RSS56d97e7 | | Published Date: 2020-08-12 | Moving computing to the edge eases the stress on bandwidth and speeds processing and responsiveness, allowing more bandwidth-heavy technologies, like AR and VR, to soar.
Image: metamorworks, Getty Images/iStockphoto
The edge computing market is projected to grow by a compound annual growth rate of 19.9% between now and 2025. Companies are aggressively deploying Internet of Things (IoT) devices at the edges of their networks and their enterprises, in the residences of customers, and in the field. These devices send, receive, and process data.
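For a sense of what a 19.9% compound annual growth rate implies, the projection is simple to compute. The starting size below is a placeholder (the figure cited here gives only the growth rate, not a base market size):

```python
def project(size_now, cagr, years):
    """Project a size forward under a constant compound annual growth rate."""
    return size_now * (1 + cagr) ** years

# At 19.9% a year, a market roughly 2.5x's over five years:
growth_factor = project(1.0, 0.199, 5)  # ~2.48
```

In other words, the cited rate, if sustained through 2025, implies the market more than doubling.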
The IoT onslaught is causing companies to rethink their IT processing architectures, which to date have remained focused on centralizing data and processing. The burning question in organizations is: Should you move some of your processing to the edge?
SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)
"Technologies like AR (augmented reality), VR (virtual reality) and autonomous vehicles are expanding," said Kurt Michel, SVP at Veea, an edge infrastructure provider. "If companies and consumers want real-time responsiveness for these technologies, they have to find a way to move processing away from the center of the enterprise and out to the edge."
The direction in companies now is to move more processing or at least pre-processing to the edge. This helps reduce bandwidth and network demands. In this way, data can be transferred to central data repositories that are in-house or in the cloud later, at times when bandwidth and other network resources are more available.
This focus has primarily been about the logistics of when it is best to move data, but the next wave of edge deployment and data and processing planning is likely to focus on how to derive business value from all of this IoT data.
SEE: Companies are adopting AR and VR to adapt to the COVID-19 pandemic (TechRepublic)
In some cases, business value from data can best be derived when the IoT receives, processes, and stores data on its own--without passing the data on. In other cases, enterprises will want IoT data aggregated with data from other core enterprise systems so holistic technologies like AI and analytics can be supported.
Regardless of the IT architecture choices a company makes for its edge/IoT and its centralized core computing, being able to mix and match a variety of data and processing deployments for business results will be an important focus.
"One of the ways that we address the need to process more data directly at the edge in these new data architectures is through a process we call Edge as a Service (EaaS)," Michel said. "With Edge as a Service, a company can begin to unlock services for processing, data storage, etc., directly at the edge. The company can establish "micro" data centers that are only a hop or two away from where the data is being generated. This deployment enables more effective bandwidth usage, and it can also be scaled out to support many IoT devices and data collection points."
SEE: 5 successful smart city projects (free PDF) (TechRepublic)
Here is an example of EaaS in action: In Korea, the Seongnam Cultural Foundation is using AR to engage and educate visitors as they learn about the lives and spirits of 100 historically important Korean activists. The foundation is doing this by running AR on an EaaS platform so visitors can view and hear 3D animated characters on their smartphones as they move through the exhibit. The technology behind the AR consists of smart edge nodes that combine Wi-Fi hotspot access with local server processing and storage. The technology connects with the visitors' smartphones.
"Through the years, we've seen the pendulum swing between centralized and distributed processing," Michel said. "So, in a sense what we are witnessing with EaaS now is nothing new. It is just another pendulum swing from centralized processing to more processing at the edge that is readjusting that balance between the data and processing that we centralize, and that which we distribute. Now with the coming of 5G, there will be more push to deploy AR, VR, and other bandwidth-heavy technologies. Taking a look at EaaS is one way that IT can get ready for it."
Also see | AI and machine learning facilitate pioneering research on Parkinson's | | Link: https://www.techrepublic.com/article/ai-and-machine-learning-facilitate-pioneering-research-on-parkinsons/#ftag=RSS56d97e7 | | Published Date: 2020-08-12 | At the 2020 Machine Learning for Healthcare Conference, IBM and Michael J. Fox Foundation will reveal a disease progression model that accurately pinpoints how far a patient's PD has advanced.
Image: iStock
A long-sought understanding of Parkinson's Disease (PD) will be revealed at Friday's 2020 Machine Learning for Healthcare Conference. In early 2019, IBM Research and The Michael J. Fox Foundation (MJFF) announced plans to collaborate and use artificial intelligence (AI) and machine learning (ML) to decode the elusive and complex mysteries surrounding PD symptoms and progression.
SEE: Robotic process automation: A cheat sheet (free PDF) (TechRepublic)
IBM and the MJFF have built an innovative disease progression model that helps clinicians more accurately pinpoint the exact status of a PD patient's progression. Despite PD's first identification in 1817--more than two centuries ago--understanding how it affects patients during the course of the disease has eluded both doctors and researchers.
Many questions about the chronic disease remain unanswered, but a better understanding gained through clinical trials can improve patient-care management and speed the development of mitigating drugs.
Machine learning has aided attempts to grasp the complexities surrounding PD. The team designed innovative algorithms that account for factors that can mask the outward appearance of someone's PD, including medications that can palliate symptoms such as tremors, improve motor control, and modify other common symptoms.
PD is a neurological disorder that affects a person's movements and often includes tremors--dopamine levels drop because of brain nerve-cell damage. It usually starts with tremors in one hand, but other symptoms that develop from the potentially lifelong disease--which remains incurable--are loss of balance, stiffness, and slow movement.
Since PD's underlying biology is still unknown, it has been onerous for doctors to determine how advanced the disease is by just judging a patient's outward appearance. It's difficult to detect the connection from disease states to biological mechanisms. If a patient is on medication (as is often the case), the physician is further challenged, as medications can mask some symptoms.
SEE: Managing AI and ML in the enterprise 2020 (free PDF) (TechRepublic)
PD patients do not react to medications or develop symptoms and related issues in exactly the same way, which makes progression difficult to define and its stages very difficult to understand and classify. The collaborative study takes into consideration the effects of different medications, which may manifest differently in each individual at different stages--something that had not been explored previously.
IBM will further use a vast amount of PD patient data, aggregated by the MJFF, in the hopes of discovering new results that can accurately define each stage of PD as it develops. If that goal is achieved, clinicians will be better equipped to design accurate, customized treatment plans, and drug developers will be able to recruit more precisely for clinical trials of new treatments and potential cures.
Further, the team hopes that the research might be inspirational or useful in the examinations and research into other chronic conditions, such as diabetes, Alzheimer's disease, and ALS. The next stage for IBM Research and the MJFF will be to focus on the recent discoveries, from the application of the new models, combined with the extensive data the MJFF has provided.
PD is one of the top 10 causes of death in those 65 and older, and it's estimated that six million people worldwide--one million of them in the US--have PD. These figures are expected to double by 2040, making further research and understanding critical and urgent.
Also see | Why plot-driven data storytelling is important and how to create it | | Link: https://www.techrepublic.com/article/why-plot-driven-data-storytelling-is-important-and-how-to-create-it/#ftag=RSS56d97e7 | | Published Date: 2020-08-12 | Data storytelling can yield significant benefits in informational analysis, but it requires skill and expertise. Learn some tips from data experts to get the most out of the experience.
Image: GaudiLab, Getty Images/iStockphoto
The age-old problem with data analysis is making the best use out of the information obtained by carefully parsing it for conclusions. It's not an easy task, so there's a reason data storytelling has become such a popular and lucrative career. Separating the wheat from the chaff is a fine art honed by extensive experience.
SEE: Big data management tips (free PDF) (TechRepublic)
I spoke with Keelin McDonell, general manager of business intelligence and integrations at Narrative Science, an artificial intelligence (AI)-powered software startup that turns data into stories; and Jolene Wiggins, CMO of Gravy Analytics, a data analysis organization. Bill Hewitt, CEO of Aternity, a digital experience management solutions provider, added some thoughts to the conversation, too.
Scott Matteson: What are the challenges companies face in their ability to quickly act on data?
Keelin McDonell: Companies are dealing with more data than ever before. The size of our data universe doubles every two years. We're now sifting through so much data that it's almost become meaningless because companies today don't have the context they need to understand what their data is telling them.
Companies need to act on data faster than ever before. Data depreciates fast, and everyone in the business--from data analysts to sales, marketing, and customer success teams--needs to be able to receive, understand, and act on insights from data in real time. This lets them get ahead of their competitors and stay nimble in a landscape that's always changing. Spending too much time puzzling over charts, graphs, and other data visualizations is time that could be spent making the next major business decision to get ahead.
There are a number of business intelligence tools that have tried to tackle the data problem (the market is worth about $30 billion and grows 15 percent each year). But many of these tools are designed for people with data analytics backgrounds, so they're not easy to use for people in other departments who rely on data to make major business decisions every day.
SEE: 4 tips for using data visualization in a board presentation (TechRepublic)
According to Gartner, these data analytics and business intelligence tools have only about 25 percent penetration at a typical company, suggesting that three-quarters of employees find them too difficult or too time-consuming to use, or don't have the skills to use them at all.
Jolene Wiggins: The biggest hurdle between data collection and analysis that keeps companies from acting on data in a timely manner lies in the organization's data structure. To get the most holistic view, companies need to pull data from several internal and external sources, which can be a very time-consuming and tedious process made more complex by different data formats and management systems.
Scott Matteson: How can those challenges be addressed?
Keelin McDonell: We think the easiest way to help companies act faster on their data is by presenting it through stories and language. That means providing plain-English stories about what the data is telling you, as opposed to a bunch of scatter plots and pie charts.
There are a variety of benefits to doing it this way.
Meet and exceed goals faster. You can devote more resources to where they will have the biggest impact, making ambitious goals more realistic.
Democratize data for the entire company. By presenting data in the form of a story, literally anyone in the business can understand what the data is telling them without having to pore over complex charts and graphs. What's more, research shows people remember information better when it's in the form of a story.
Make decisions faster. When you know exactly what the data is telling you, you can confidently make major business decisions without having to worry if you've read a chart or dashboard wrong. Meetings can be spent discussing what really matters as opposed to asking the room to read your pie chart.
Get everyone on the same page. Charts and graphs are open to interpretation by the person reading them. By presenting data as a story, you reduce the chance that two different departments are arriving at two different conclusions from the same data visualization.
Improve resource allocation. Spend less time reading data, and more time on tasks that move the needle, like drafting a new marketing email or putting more money behind a social media post.
Jolene Wiggins: Overcoming roadblocks to acting quickly on data depends on data integration: Combining data originating from different sources into a single location with unified processes. Also, companies need to fundamentally find a way to make data part of the culture of the organization at every level. This culture is much easier to cultivate when the collection and structure of data is unified across organizations, making it quick and easy for the right people to access regardless of team. When companies have the right systems in place to provide access to data, and the right resources to analyze and pull learnings from that data, then data can become central to operations and decision-making.
Scott Matteson: What types of responsibilities do data analysts possess?
Keelin McDonell: Data analysts collect information about the company's current and potential customers and use it to draw meaningful conclusions about their behavior.
Basically, data analysts are responsible for telling you how customers are reacting to the ways your company interacts with them and why.
Data analysts are also increasingly responsible for helping their organizations understand the context of the data they're collecting. For example, it's more important for the sales team to know that sales have rebounded rather than risen 4%. They'll forget the "4%" figure (and it'll be different in a week, anyway).
SEE: Why managing data science projects is not the same as IT projects (TechRepublic)
Day-to-day, data analysts produce reports that detail trends with customer behavior and potential improvement areas for the company. They're also responsible for identifying patterns and trends in the data, and then working with multiple departments inside and outside the company to exploit those trends. They also work with IT teams to set up systems to collect customer and company data.
Data analysts are arguably one of the most important roles at any company, because they provide access to the information that the company needs to make the major business decisions necessary to survive and beat the competition.
Jolene Wiggins: Data analysts, of course, need to have technical know-how and solid math skills, but they also have to be able to interpret the numbers to concisely tell a story. This means visualizing the data, which requires a bit of design and creativity, and explaining the data simply enough that the storyline is clear.
Scott Matteson: What types of skill sets are beneficial for data analysts?
Keelin McDonell: While all data analysts should know tools like Excel, Tableau, PowerBI, and basic programming languages, the importance of soft skills can't be understated.
The data analyst role used to be very technical in nature. But as data becomes even more vital to a company's ability to remain competitive, data analysts are critical for helping the company make better-informed decisions. That means you need to influence people. Predictive models, line charts, and numbers don't do that; stories do.
SEE: How to become a data scientist without getting a Ph.D. (TechRepublic)
Jolene Wiggins: Critical thinking is possibly the most important skill set data analysts should have. Data doesn't always provide straightforward, cause-and-effect answers. Analysts need to be willing to look for alternate explanations and view findings with enough suspicion to dig a little deeper. Their main job is to interpret what data means, which requires them to walk a fine line between simply taking the numbers at face value and overreaching with assumptions.
Scott Matteson: What might a typical day for a data analyst look like?
Keelin McDonell: A huge amount of a data analyst's day is spent reporting, or communicating to various stakeholders what the data means.
A lot of this reporting is ad hoc: Executives make one-off requests for information on how sales have performed in a specific time frame, for example. It's the analyst's job to provide those executives with those insights, which is important, but can also mean they are running from one menial reporting task to another all day.
A lot of that reporting can be automated, freeing analysts for more strategic, high-level, value-added work.
Jolene Wiggins: An analyst's workload could look very different from day to day, with a mix of research for internal purposes and client-oriented analyses. They may be working on several different projects at the same time, requiring them to switch gears frequently. This diversity in projects can help spark creativity and overcome roadblocks--you never know when you're going to learn something that can be applied to another unrelated project.
Scott Matteson: How can data analysts present the most meaningful information?
Keelin McDonell: To deal with the overabundance of data that companies collect, data analysts spend hours putting spreadsheets and bar charts together, then share that with sales, marketing, product, and RevOps teams via dashboards. But the problem with this approach is that you're leaving a lot open to interpretation, which can take up loads of time when trying to make a major business decision.
SEE: Why big data tracking and monitoring is essential to security and optimization (TechRepublic)
When thinking about what information is most meaningful, consider what information your colleagues need to take the next step.
Here are a couple examples:
A salesperson might see that sales have increased four percent, which is inherently good news. But the missing context is that sales have rebounded. Had the salesperson known this, they would have been able to better understand where to allocate resources.
The marketing team needs to know how well their channels are performing and the quality of business they're bringing in, so they know where to increase and decrease the money they're spending.
Marketing and sales teams need to know that their revenue is up. But it's even more important for them to know that the number of licenses sold decreased, but the average deal size increased.
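The rebound example above can be sketched as a tiny story-generation function. This is a hypothetical illustration of the idea, not Narrative Science's product; the function name, thresholds, and wording are all assumptions.

```python
# Sketch of turning a metric into a plain-English data story:
# report the context ("rebounded") rather than just the raw percentage.

def sales_story(prev_week, this_week, recent_low):
    """Describe weekly sales with context, not just a number."""
    change = (this_week - prev_week) / prev_week * 100
    # A "rebound" here means sales crossed back above a recent low point.
    if this_week > recent_low and prev_week <= recent_low:
        return f"Sales have rebounded, up {change:.0f}% week over week."
    return f"Sales changed {change:+.0f}% week over week."

print(sales_story(prev_week=100_000, this_week=104_000, recent_low=100_000))
```

The same 4% figure reads very differently once the story supplies the missing context.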
Jolene Wiggins: Analysts need to be able to concisely communicate meaningful data, so visualization, of course, is a great vehicle for doing that. But simply generating a chart or a graph isn't necessarily the most effective storytelling tool. After thinking about what the data actually says, analysts need to present it in a format that allows their audience to follow the logic and grasp the story being told.
Scott Matteson: What is the purpose of a data-based story?
Keelin McDonell: A data-based story gives you crucial information about what's happening at a business and why, in plain English. This helps anyone in the company better understand what the data is telling them and helps turn that into action. It helps data analysts become better data storytellers, too.
SEE: How to create your first Tableau Software data visualization chart (TechRepublic)
A good data story is just a story, the same as any other story you might read. It starts with a hook, is easily digestible and (perhaps most important) memorable. Ultimately, a data story helps everyone get a clearer picture of how the business is doing.
We believe in this approach so much that my colleagues wrote a book about it. It's called Let Your People Be People. In the book, Nate Nichols and Anna Walsh explain how anyone in the business can tell a compelling story and the tremendous impact that a good one can have on your organization.
Jolene Wiggins: Telling stories based on data is becoming increasingly important in our current social landscape because verifiable data marks the difference between opinion and fact. Although there is still some reasonable room for error, data-based stories help remove human bias by quantifying past events to help predict future outcomes. The goal of most writing is to convince the reader--great data helps to strengthen the writer's arguments and makes whatever is being said that much more powerful.
Scott Matteson: What should it contain?
Keelin McDonell: Telling your data-based story within the right context is one thing. But actually having it make sense to the person reading it is another.
Here's what you need to know:
Summarize your story first: Provide a clear summary of your findings by setting the scene. For example, instead of titling a report "Sales Performance Week of April 3," summarize what the performance was like: "The First Week of April Was Top Sales Performers' Most Successful Week of Q2."
Tell your story with words and visualizations: Describe why or how that week was successful with words that explain how the outcome relates to the goal for the initiative or project. Even if the result seems normal, it may still be above or below your company's goals.
Supplement your story with visualizations: Don't assume that everyone will know what insights you want them to pull out of visuals. Supplement them with words so your audience has a clear understanding of what the charts, graphs, and data are telling them (and what you want them to know).
Include only the details your audience needs: You should provide actionable recommendations for teams to implement; not unimportant, fluffy details. Even if you don't have anything new to report, contextualize your findings. For example, "Even though we launched a new product last month, we have no new leads."
Jolene Wiggins: Data-based stories should refrain from speculation and opinion--after all, correctly analyzed data is meant to represent verifiable facts. Add color to a story with specific anecdotal examples, but remember that one example is not necessarily representative of the whole. The best way to tell a data-based story is to keep it simple--don't let the story get lost in the numbers. When storytellers overwhelm readers with numbers, the message doesn't land. Remember: Use the numbers that are most relevant and necessary to paint an accurate picture.
SEE: How graph databases help analyze complex relationships (TechRepublic)
Bill Hewitt: Data storytelling is an integral function for executives as they demonstrate performance, drive business alignment, and communicate outcomes across their organizations and to executive boards. While there is a finite amount of 'gut instinct' that goes into managing a successful enterprise, quantitative data uncovers important insights that could influence how an organization allocates its budget, positions products within the marketplace, and even evaluates personnel.
This is especially true when discussing the value--or lack thereof--of complex IT projects and implementations. Many tech leaders struggle to communicate with executives because they're focused more on output than business outcomes and insights.
For instance, it's not enough to say that Office 365 was rolled out to employees. Instead, IT teams should be able to provide granular insights that demonstrate traction and ROI of their investment, including the percentage of employees using the new applications, whether productivity has increased, and whether the platform has had an adverse performance effect on the entire end-user infrastructure.
Additionally, it is critical to present the "so what" in the context of your business relative to best practices, industry benchmarks and the current business climate. Ultimately, context plus instructive data points help business leaders connect the IT team's efforts with overall business priorities and performance.
Also see | How to convert Ubuntu into a rolling release | | Link: https://www.techrepublic.com/article/how-to-convert-ubuntu-into-a-rolling-release/#ftag=RSS56d97e7 | | Published Date: 2020-08-10 | Have you ever wanted to convert Ubuntu Desktop into a rolling release? Thanks to lead developer, Martin Wimpress, you can. Jack Wallen shows you how.
Image: Jack Wallen
If you're an Ubuntu fan, you know that particular flavor of the Linux desktop and server is a fixed release. What does that mean? Well, there are two types of Linux releases: Fixed and rolling. A fixed release is one wherein only security updates are released on a frequent basis. All other software updates are held back for a fixed period of time. With a rolling release, on the other hand, every piece of software is immediately updated, as soon as the developers release a new version of a specific package.
SEE: How to become a software engineer: A cheat sheet (TechRepublic)
With fixed distributions, you get releases like Ubuntu 20.04.1. That .1 means there have been numerous updates to the installed packages. With rolling releases those point upgrades aren't necessary, as the software is in a constant state of being upgraded.
The pros and cons are obvious: With a fixed release, software has more time for vetting, so it might tend to be more stable. With a rolling release, your distribution always has the latest software.
Thing is, with Ubuntu, you don't have a choice, as it is a fixed release.
Unless you follow Ubuntu desktop lead developer, Martin Wimpress, who has created an easy means to turn Ubuntu into a rolling release. If you're a developer, this might be a great way to work with the Ubuntu desktop and have the latest, greatest software installed, without having to jump through a bunch of hoops.
Let me channel Wimpress and show you how this is done.
What you'll need
The one caveat to making this transition is that it does not support LTS releases. You must be using a development release. Because of that, I will demonstrate using the daily version of Ubuntu 20.10 (Groovy Gorilla). Do note: This process is only designed for the desktop version of Ubuntu and will not work on the server (unless you've installed a desktop environment on your server).
How to install the necessary packages
The first thing you'll need to do (if you're using the Desktop version of daily) is to install git. Open a terminal window and issue the command:
sudo apt-get install git -y
Once that installs, clone the Rolling Rhino repository with the command:
git clone https://github.com/wimpysworld/rolling-rhino
Change into the newly created directory with the command:
cd rolling-rhino
Run the included installation script with the command:
sudo ./rolling-rhino
You will be asked if you are sure you want to start tracking the devel series. Answer yes to this question, and the process will begin.
This script will take some time to finish its task (2-5 minutes), so you probably shouldn't mindlessly stare at the monitor. Instead, go off and take care of some other admin task that's on your list. When you come back, you'll see the success message (Figure A).
Figure A
Reboot the machine, and enjoy the rolling goodness of Rhino.
That's all there is to converting the fixed daily Ubuntu Desktop release into a rolling release. Do note this is a development series, so you probably won't want to use it as a production machine (at least not until you use it enough to discern if it is stable enough to meet your demands).
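If you want to confirm the conversion took, Rolling Rhino works by pointing your apt sources at the devel series rather than a release codename. A minimal sketch of that check is below--the helper function and the sample sources line are illustrative assumptions, not part of Rolling Rhino itself:

```shell
# Hypothetical check: a converted system's apt sources track "devel"
# instead of a codename like "groovy".
is_rolling() {
  case "$1" in
    *" devel "*) return 0 ;;  # tracking the rolling devel series
    *) return 1 ;;            # still pinned to a fixed codename
  esac
}

# Example line in the format Rolling Rhino would write (assumed):
line="deb http://archive.ubuntu.com/ubuntu/ devel main restricted universe multiverse"
if is_rolling "$line"; then
  echo "tracking devel (rolling)"
else
  echo "fixed release"
fi
```

From there, a routine `sudo apt update && sudo apt full-upgrade` keeps the rolling system current.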