Data Center Infrastructure Job Opportunities

Data Center Infrastructure Engineer

Job Summary
The SDS (Strategic Data Solutions) Sr. Data Center Engineer is responsible for overseeing the operation, maintenance, and support of SDS-managed server rooms and colocation facilities located within the Americas, Europe, and APAC regions. The role includes the management of data-center-related fixtures in fulfillment of service level agreements, in accordance with standards established by the SDS organization.

Description
The candidate will be responsible for managing rack-and-stack activities covering a variety of network and server components inside SDS data centers. Execute projects in the deployment of data center services, including cable infrastructure and RFID elements. Manage DC infrastructure in accordance with SDS guidelines, including server room audits, security inspections, and the review of operational procedures. Assist in the investigation of DC incidents to improve uptime and stability of the environment. Proven track record in remote vendor management for infrastructure expansion, enforcing compliance with technical specifications and quality standards. Good knowledge of capacity planning for space, power, and cooling. Work experience with data center infrastructure management (DCIM) tools such as OpenDCIM, Nlyte, iTRACS, etc. Provide operational reports to functional leaders within the SDS organization. Experience using Visio, Microsoft Office, or similar tools for day-to-day job functions and productivity.

Education
Bachelor's degree in an IT-related discipline. Certification in data center operations or a related data center certification (e.g., CDOM, CDFOM, CDCE, CDCS) preferred. Project management certification (PMP) is a plus.
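
The space/power/cooling capacity planning mentioned above often reduces to simple budget arithmetic. As a hedged illustration only (the rack count, per-server draw, PUE, and room budget below are hypothetical, not taken from this posting), a minimal sketch in Python:

    # Hypothetical capacity-planning sketch: check whether a planned rack-and-stack
    # deployment fits within a server room's power budget.
    RACKS = 10                   # planned racks (hypothetical)
    SERVERS_PER_RACK = 20        # servers per rack (hypothetical)
    WATTS_PER_SERVER = 450       # draw per server in watts (hypothetical)
    PUE = 1.6                    # power usage effectiveness of the room (hypothetical)
    ROOM_POWER_BUDGET_KW = 180   # total facility power available in kW (hypothetical)

    it_load_kw = RACKS * SERVERS_PER_RACK * WATTS_PER_SERVER / 1000
    facility_load_kw = it_load_kw * PUE            # IT load plus cooling/overhead
    headroom_kw = ROOM_POWER_BUDGET_KW - facility_load_kw

    print(f"IT load:       {it_load_kw:.1f} kW")
    print(f"Facility load: {facility_load_kw:.1f} kW (PUE {PUE})")
    print(f"Headroom:      {headroom_kw:.1f} kW")

With these made-up numbers the IT load is 90 kW and the facility load about 144 kW, leaving roughly 36 kW of headroom; a real assessment would also check space (rack units) and cooling capacity separately.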
Sacramento
Senior Data Infrastructure Engineer

Job Summary
In this role, you'll be working very closely with a small team of engineers and statisticians to design, build, and maintain systems that enable rapid analysis of large datasets.

Description
We're seeking candidates who are confident in the systems they build, but humble and cognizant of the limitations of their software and infrastructure. This role also requires great communication skills, as you'll contribute to functional specs and design documents that describe the systems you build to coworkers, other teams, and those who join after you. You might ship bugs, but we hope you also employ strategies to reduce risk through thoughtful design, unit and integration tests, stress tests, CI, instrumentation, and monitoring.

Education
BS/MS in CS or equivalent experience.

Additional Requirements
We believe that effective systems design should be sympathetic to the underlying hardware. We hope you have some experience with:
• Storage fundamentals (disk types and drive layouts, random and sequential IO, compaction)
• Compute fundamentals (concurrency models, distributed single-threaded versus coordinated multithreaded scheduling, synchronous and asynchronous IO)
• Networking fundamentals (data locality, datacenter network layouts, multi-datacenter systems design)
You might have specific experience with Apache projects like:
• HDFS or other distributed file systems
• HBase, Cassandra, or similar distributed databases
• Kafka or another distributed replicated log
• ZooKeeper or a similar coordination system
• Mesos, YARN, or other resource allocation and scheduling systems
We also hope you have experience with (or strong interest in learning) modern Java and Python. We don't expect you to be an expert in everything described above, but we do hope you have strong experience in a few areas, and enough curiosity and desire to become skilled in the rest. If this position sounds like a good match for your interests and skills, please consider applying. You've found a unique team.
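
One of the compute fundamentals listed above, synchronous versus asynchronous IO, can be made concrete with a small, hedged Python sketch using the standard asyncio library; the simulated "shard reads" below are purely illustrative:

    import asyncio
    import time

    async def fetch(name: str, delay: float) -> str:
        # Stand-in for an IO-bound call (network read, disk read, RPC).
        await asyncio.sleep(delay)
        return f"{name} done after {delay}s"

    async def main() -> None:
        # Asynchronous IO: the three "reads" overlap, so wall time is roughly
        # the slowest one (~1s) rather than the sum (~2.25s) a synchronous
        # loop would take.
        start = time.perf_counter()
        results = await asyncio.gather(
            fetch("shard-a", 1.0),
            fetch("shard-b", 0.75),
            fetch("shard-c", 0.5),
        )
        print(results, f"elapsed={time.perf_counter() - start:.2f}s")

    asyncio.run(main())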
Santa Clara
Data Infrastructure Engineer

Job Summary
Maps Evaluation Metrics is responsible for defining, implementing, and measuring actionable metrics that summarize the quality of algorithms, services, and data. We achieve this by mining massive amounts of rich data, including human judgements, ground truth, and user feedback logs. We primarily work on Maps, but our platform is used by other applications like Siri, iTunes, and News. As a Data Infrastructure Engineer, you will be working with some of the most unique and interesting data sets in the world, including geospatial data, probe, search logs, traffic data, human judgements, and A/B experiment data. You will partner with data scientists and engineers to acquire valuable signals on where and how we have the most opportunity to improve the user experience for Apple customers around the world. You will build large-scale data pipelines and end-to-end analytics solutions to transform rich data at Apple scale into actionable insights that directly impact customers. As a member of a small and dynamic team, you will have significant responsibility and influence in shaping all parts of the data platform.

Description
• Design and implement frameworks to manage complex workflows and monitor data quality
• Design, build, and deploy ETL pipelines that are efficient, reliable, and easy to operate
• Research and build efficient and scalable data storage and retrieval systems that enable interactive reporting on high-dimensional data
• Research and build the next-generation data dashboard and visualization solution
• Build libraries and frameworks to empower data scientists to work effectively with our data products

Education
BA, MS, or PhD in Statistics, Computer Science, or other quantitative fields.
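
As a hedged sketch of the ETL-pipeline responsibility above (the file name, columns, and quality check are invented for illustration and are not Apple's actual schema), a minimal extract-transform-load step in Python:

    import csv
    import sqlite3

    # Create a tiny sample input so the sketch is self-contained.
    with open("feedback_log.csv", "w", newline="") as f:
        f.write("query,rating\ncoffee near me,4.5\nbroken address,\nparkway exit,3.0\n")

    def extract(path):
        # Read raw rows from a delimited feedback log (hypothetical layout).
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        # Basic data-quality gate: drop rows missing a rating, normalize types.
        return [(r["query"], float(r["rating"])) for r in rows if r.get("rating")]

    def load(rows, db_path):
        # Load into a reporting table; a production pipeline would target a warehouse.
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS feedback (query TEXT, rating REAL)")
        con.executemany("INSERT INTO feedback VALUES (?, ?)", rows)
        con.commit()
        con.close()

    load(transform(extract("feedback_log.csv")), "metrics.db")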
Santa Clara
SDS - Infrastructure Project Manager

Job Summary
Apple's Strategic Data Solutions (SDS) team is looking for an experienced IT Infrastructure Project Leader who can partner with our group of high-performing technical experts to address complex business problems by driving, coordinating, facilitating, and managing projects. The IT Infrastructure Project Manager will work with cross-functional and cross-organizational teams to understand, augment, and oversee technical projects related to the setup of server rooms and the deployment of hardware consisting of network devices, servers, SAN, and database systems used for the global manufacturing business. This role requires enterprise data center operations experience, having managed projects in one or more of the following areas: server room moves, migrations, consolidations, server refreshes, and virtualization.

Description
Key Responsibilities and Activities
The Infrastructure Project Manager must be able to lead project teams and communicate effectively to varying levels both inside and outside the SDS organization. The candidate must be capable of interacting with employees at all levels of the organization and possess the judgment and experience to plan and accomplish goals in an environment with multiple and, at times, competing priorities. The candidate must be able to lead project teams with the highest level of proactive management and IT leadership skills to resolve project issues and ensure that project objectives and requirements are met. These attributes will be used in project planning, problem solving, and root cause analysis of issues that arise during implementations.

Project-Based Responsibilities
The candidate must be able to:
• Manage the successful implementation and rollout of moderately to extremely complex infrastructure projects
• Partner with technical leads and the business line to create work breakdown structures for the project
• Take high-level directives and lead the effort of developing start-to-finish project plans based on the internal project management framework and lifecycle processes
• Serve as the primary liaison between the business line, operations, and the technical areas throughout the entire project lifecycle
• Comply with proper project lifecycle methodology, including all required and conditional documentation
• Ensure all required enterprise process deliverables are completed
• Maintain proper tasks, along with level of effort and timelines assigned to individual project team members; monitor project issues and risks and proactively escalate when appropriate
• Drive project team meetings, create meeting agendas, and publish meeting minutes
• Prepare and present status reports and other project summary information for business leaders, executive management, and end users
• Follow change management processes and procedures and influence change where necessary
• Work on confidential projects and maintain confidentiality

Education
Undergraduate degree or equivalent technical degree
Sacramento
Senior Software Engineer - Cloud Infrastructure

Uber's Cloud team is seeking experienced software engineers to help build the future of urban transportation on public cloud infrastructure. Sound interesting? Read on. Our job is to build a next-generation, flexible capacity platform to propel Uber into the next 10x-100x growth levels, which will come pretty soon, considering that we're doubling in size every six months. The Cloud team is building systems for consumption by all the other infrastructure and engineering teams at Uber. We're setting best practices and helping other teams architect better solutions, and we're not afraid to get our hands dirty. You'll get to experiment with a range of cutting-edge technologies that will define our next-generation infrastructure platform and increase the productivity of all engineers at Uber.

HERE ARE THE KINDS OF SKILLS WE'RE LOOKING FOR:
• Excellent Java (or Go) development and OO design skills.
• Strong interest (and ideally experience) in building distributed systems with scalability and reliability in mind. If your architecture can't scale to 100x at 99.99% availability, it won't work for Uber.
• Product- and customer-centric mindset. The platform we are building is going to be leveraged by hundreds of Uber engineers across many teams.
• Prior experience working with AWS EC2, VPC, S3, or other public cloud services.
• Experience or interest in building clean, public-facing REST/Thrift APIs.

PERKS:
• Employees are given Uber credits every month.
• The rare opportunity to change the world such that everyone around you is using the product you built. We're not just another social web app; we're moving real people and assets and reinventing transportation and logistics globally.
• Sharp, motivated co-workers in a fun office environment.

BENEFITS (U.S.):
• 401(k) plan, gym reimbursement, nine paid company holidays.
• Full medical/dental/vision package to fit your needs.
• Unlimited vacation policy; work hard and take time when you need it.

We're bringing Uber to every major city in the world. We need brains and passion to make it happen and to make it happen in style. Be sure to check out the Uber Engineering Blog to learn more about the team.
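
Day to day, "working with AWS EC2, VPC, S3 or other public cloud services" usually means driving them through their APIs. A minimal, hedged boto3 sketch (the bucket name is a placeholder, and credentials/region are assumed to be configured in the environment):

    import boto3

    # Sum object sizes in a bucket; the bucket name below is hypothetical.
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    total_bytes = 0
    for page in paginator.paginate(Bucket="example-capacity-platform-logs"):
        for obj in page.get("Contents", []):
            total_bytes += obj["Size"]

    print(f"total stored: {total_bytes / 1e9:.2f} GB")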
San Francisco
Data Scientist

Description

Data Scientist

About Hitachi Consulting Corporation
Hitachi Consulting is the global management consulting and IT services business of Hitachi Ltd., a global technology leader and a catalyst of sustainable societal change. In that same spirit, and building on its technology heritage, Hitachi Consulting is a catalyst for positive business change, propelling companies ahead by enabling superior operational performance. Working within their existing processes and focusing on targeted functional challenges, we help our clients respond to dynamic global change with insight and agility. Our unique approach delivers measurable, sustainable business results and a better consulting experience.

About Hitachi, Ltd.
Hitachi, Ltd. (www.hitachi.com), headquartered in Tokyo, Japan, delivers innovations that answer society's challenges with our talented team and proven experience in global markets. The company's consolidated revenues for fiscal 2014 (ended March 31, 2015) totaled ¥9,761 billion (US$81.3 billion). Hitachi is focusing more than ever on the Social Innovation Business, which includes power & infrastructure systems, information & telecommunication systems, construction machinery, high functional materials & components, automotive systems, healthcare, and others.

Purpose of Position
The Data Scientist designs, builds, and maintains analytical models that mine large sets of structured, semi-structured, and unstructured data, looking for unique insights and correlations that are not evident through traditional data warehouse or business intelligence techniques. The Data Scientist will leverage a deep understanding of math, applied statistics, engineering, and software development to build complex models that identify business trends and predict outcomes. The Data Scientist will work in a highly collaborative team environment and be responsible for the overall quality and accuracy of the resulting model. The position will work closely with the U.S. Insight & Analytics staff and business leaders to create actionable machine-learning products that support strategic, tactical, and operational decisions. This individual will be responsible for the design, implementation, evaluation, and analysis of data and reporting mechanisms to bring descriptive clarity to Client business metrics and predictive power to Client big-data intellectual property.

Qualifications

Expected Duties
• Designs, develops, and implements algorithmic solutions for time series, streaming data, and big data at rest
• Works with functional consulting teams across business domains, including Communications, Media, Entertainment, Oil & Gas, Manufacturing, Retailing, and Distribution
• Develops reusable, maintainable, and effective predictive and behavioral models through rapid, iterative, Agile development using machine learning techniques
• Contributes to Hitachi Solutions through a full Agile software development lifecycle methodology
• Works closely with industry subject matter experts, data architects, and solution architects to understand business problems, identify data sources, develop analyzers and predictive models, and configure visualization software to communicate results
• Applies intellectual curiosity and deep analytical thinking to mine large data sets for hidden gems of insight and correlation
• Actively seeks new methodologies, algorithms, tools, and technologies to improve existing models and build new state-of-the-art models
Education and Experience
• Bachelor's degree in a quantitative field such as Mathematics, Physics, Physical Chemistry, Statistics, Actuarial Science, Engineering, Economics, or a related field from a four-year college or university. Master's degree or higher preferred.
• 5 to 10 years of experience in an industrial or government scientific or engineering laboratory, underwriting firm, risk-management firm, or sell-side financial house.
• Fluency with analytical software including R, Python, Stata, MATLAB, SAS, and/or SPSS
• Extensive experience applying machine learning algorithms, predictive modeling, data mining, and statistical analysis to solve business problems.
• Dexterity and nimbleness with the Microsoft Office/VBA stack

Desired Skills and Abilities
• Demonstrated in-depth knowledge of statistics, relational databases, object-oriented programming, and a statistical-programming environment.
• Working knowledge of ETL tools such as Pentaho, Informatica, or LavaStorm
• Excellent verbal, written and
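
As a hedged illustration of the predictive-modeling work described above (the data is synthetic, and nothing here is claimed about Hitachi's actual tooling beyond the Python stack the posting lists), a minimal scikit-learn sketch:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Synthetic "business outcome" data: two numeric drivers plus noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    print("holdout MAE:", mean_absolute_error(y_test, model.predict(X_test)))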
Dallas
Public Key Infrastructure (PKI) Product Owner

The Certificate and Key Engineering team is looking for a proven leader ready to join forces with technical partners, business partners, and stakeholders across the enterprise to lead the Public Key Infrastructure (PKI) DevOps team in support of its mission. As the PKI Product Owner, you will drive a culture of collaboration across the DevOps team and extended support teams in order to consistently deliver on tactical goals in support of the broader InfoSec strategic objectives. The Product Owner is responsible for ensuring the health and integrity of the Public Key Infrastructure service by ensuring enforcement of documented procedures and contributing to the enrichment of its policies and practices. The Product Owner will also engage vendors, assist with financial planning, ensure the program stays within budget, and track and manage program risks.

Qualifications
Minimum Qualifications:
• BA/BS degree in Computer Science, Information (Systems) Technology, or a related discipline (Master's degree preferred).
• Minimum 7-10 years' experience in IT operations.
• 5+ years' experience managing complex IT projects
• Experience with client, server, and mobile security across multiple operating systems
• Experience with security governance, risk, and incident management

Additional Profile Characteristics Preferred:
• Proven leadership, solid communication skills, and understanding of IT operations
• Knowledge of techniques for planning, monitoring, and managing the delivery of a service
• Ability to inspire teams and command respect across organizations to create a collaborative community
• Excellent problem-solving skills to quickly resolve short-term challenges and adjust for long-term improvements
• Able to effectively manage priorities from stakeholders corporate-wide
• Strong interpersonal skills to manage relationships with team, management, customers, and vendors
• Understanding of digital certificates and their use for signing and encrypting
• Familiarity with ISO 27001/27002, ITIL, and COBIT frameworks
• Security certifications (e.g., CISSP, CISM) a plus

Inside this Business Group
Intel's Information Technology Group (IT) designs, deploys, and supports the information technology architecture and hardware/software applications for Intel. This includes the LAN, WAN, telephony, data centers, client PCs, backup and restore, and enterprise applications. IT is also responsible for e-Commerce development, data hosting, and delivery of Web content and services.

Posting Statement
Intel prohibits discrimination based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation, or any other legally protected status.
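
To ground the "digital certificates and their use for signing and encrypting" requirement, here is a minimal, hedged sketch using the third-party Python cryptography package; the subject name is a placeholder, and real issuance in a PKI program would go through the team's CAs and documented procedures rather than self-signing:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate a key pair and a self-signed certificate (placeholder subject).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "pki-demo.example.internal")])
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)              # self-signed, so issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())     # the signature binds the public key to the subject
    )

    print(cert.public_bytes(serialization.Encoding.PEM).decode())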
Phoenix
Senior Software Engineer, Hadoop Analytics Infrastructure

Uber is currently looking for engineers with expertise and passion for building large-scale data analytics systems. The Hadoop Analytics and Infrastructure team is part of the Data Infrastructure team at Uber. Based in Palo Alto, the team is responsible for building the interactive and batch querying systems, advanced data processing platforms, and the underlying storage and resource management infrastructure. Our mission is to design, develop, and manage world-class big data systems which are highly scalable, available, fault tolerant, secure, powerful, and efficient, to empower data-driven decisions for every group within Uber: from data scientists to city operations teams, from product engineers to marketing. Some of the products that we power include driver/rider matching, ETA calculations, image recognition for Maps and autonomous vehicles, secure data access, ad hoc exploration of city-level patterns, etc.

SOME OF THE CHALLENGING PROBLEMS WE ARE SOLVING:
• Provide interactive SQL access to 10s of PB of data with a few seconds of latency
• Unified scheduler for batch and online workloads to globally optimize resources
• Data security with authentication, authorization, and auditing mechanisms
• Interactive workbench to boost productivity of Uber's data scientists

HERE IS WHAT WE'RE LOOKING FOR:
• Customer obsession and the ability to translate customer and technical requirements into detailed architecture and design
• Self-motivated learner with strong systems-level coding, testing, debugging, code review, and design review skills
• Experience with distributed systems, large-scale data analytics, query optimization and execution, highly available/fault tolerant systems, replicated data storage, and operating complex services running on-prem or in the cloud are all pluses
• Passionate about mentoring other engineers, fostering our fast-paced culture in them, and helping build a fast-growing, impactful team
• Under-the-hood experience with some of the big data analytics technologies we currently use, such as Apache Hadoop (HDFS and YARN), Hive, Spark, Docker/Mesos, and Tez. Presto is a plus. Under-the-hood experience with similar systems such as Vertica, Apache Impala, Drill, Google Borg, Google BigQuery, Amazon Redshift, Kubernetes, Mesos, etc. is also a plus.

PERKS:
• Employees are given Uber credits every month.
• The rare opportunity to change the world such that everyone around you is using the product you built. We're not just another social web app; we're moving real people and assets and reinventing transportation and logistics globally.
• Sharp, motivated co-workers in a fun office environment.

BENEFITS (U.S.):
• 401(k) plan, gym reimbursement, nine paid company holidays.
• Full medical/dental/vision package to fit your needs.
• Unlimited vacation policy; work hard and take time when you need it.

We're bringing Uber to every major city in the world. We need brains and passion to make it happen and to make it happen in style.
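
A hedged sketch of the interactive-SQL side of this work, using PySpark with Hive support; the trips table, columns, and partition are hypothetical placeholders, not Uber's actual schema:

    from pyspark.sql import SparkSession

    # Start a session against the warehouse metastore (assumes Hive is configured).
    spark = (
        SparkSession.builder
        .appName("city_eta_report")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Interactive-style aggregate over a (hypothetical) partitioned trips table.
    df = spark.sql("""
        SELECT city_id, AVG(eta_seconds) AS avg_eta
        FROM trips
        WHERE ds = '2017-01-01'
        GROUP BY city_id
        ORDER BY avg_eta DESC
        LIMIT 20
    """)
    df.show()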
Palo Alto
Infrastructure Cloud Architect (AWS Dev Ops)

Description
Experian Consumer Services – Careers That Define "The Next Big (Data) Thing" for Consumers
What could be more exciting, personally and professionally, than being part of a "disruptive" business? Consider taking your career to the next level by joining the leader that continues to disrupt the competition. As the "disruptor" and market leader, we pride ourselves on building new markets, leading the pack through continuous evolution and innovation. It's a position Experian Consumer Services has enjoyed for more than a decade, and we're always looking for the talent that can help expand that lead. When you're the leader, it's always urgent, important, and market-changing. We think that defines the true "disruptive" business. Join us and create some chaos for the competition.

The AWS DevOps Architect is a hands-on technical position responsible for architecting, designing, and implementing enhancements and extensions, and leading the technical direction of our "Platform as a Service" solution in a public cloud-computing environment. Based on a "developer self-service" model, our platform automates:
• AWS resource provisioning and management (based on immutable compute resources)
• Build pipeline supporting continuous delivery, including support for canary and blue-green releases
• Container-based delivery (Docker)
• Micro-service support (service registry, service-to-service authentication)
• Instrumentation, monitoring, notification, and alerting
• Data pipeline from transaction support (Dynamo) to BI (Redshift)
• High availability and disaster recovery design
• Security and data encryption framework

Responsibilities:
• Provide technical direction and leadership to others on the team, guided by the principle of infrastructure as code
• Work directly with the management team to coordinate and determine the direction and strategy of the platform
• Integrate business requirements and ensure consistency across business units
• Collaborate with engineering teams to identify platform needs and issues
• Architect and define build pipeline architectures in collaboration with product architects and the DevOps team
• Architect, design, and deploy cloud platform capabilities using AWS (full stack: network, load balancing, DNS, security, databases, compute, and a range of managed services)
• Architect and create monitoring and alerting capabilities
• Architect and implement infrastructure capabilities in an automated cloud world, such as backups, security tools, IAM, monitoring, etc.
• Perform advanced technical troubleshooting for cloud and e-commerce environments
• Design disaster recovery (multi-region), backup/BCP, and multi-cloud strategies for the platform
• Create automated tools and processes for development teams to self-service day-to-day tasks
• Role requires in-office weekly support to remain engaged with architecture and engineering changes

Qualifications
Education and Experience:
• 10+ years of experience in an infrastructure role, focused on build pipeline or infrastructure management, with at least 5 years supporting DevOps concepts
• Production experience with public cloud (AWS, Google, or Azure; AWS strongly preferred)
• Fluency in Python or another programming or scripting language
• Proficiency in software and systems design and architecture
• Experience with a variety of open source technologies and tools in support of cross-team collaboration
• Bachelor of Science or comparable experience

Qualifications Required:
• Strong knowledge of the DevOps tool chain on the AWS Linux platform: Jenkins, Nexus OSS, Python, boto, troposphere, .NET/Java, Ansible, Puppet, Chef, CodePipeline, Confluence, Git, Jasmine, Chocolatey, CloudFormation, etc.
• Strong experience automating the creation of most Amazon Web Services (CloudFormation, EC2, Lambda, CloudFront, Auto Scaling, CloudWatch, ElastiCache, ELB, etc.)
• Experience with serverless and container-based solutions including Docker, ECR, ECS, and Lambda
• Experience with automated testing tools such as Selenium, Cucumber, or Serverspec
• Experience deploying automation solutions in a public cloud environment such as AWS
• Previous working experience with compliance governing bodies similar to PCI and HIPAA
• Operationally savvy: experience with monitoring, alerting, and analyzing system metrics to identify problems and understand system behavior
• Ability to work in a fast-paced e-commerce environment
• Has a passion for change and doing things differently; innovation
• Strong
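
Because the posting calls out infrastructure as code with CloudFormation, troposphere, and boto, here is a minimal, hedged sketch of generating a CloudFormation template with troposphere; the AMI ID, instance type, and tag values are placeholders:

    from troposphere import Tags, Template
    from troposphere.ec2 import Instance

    # Describe infrastructure in code and emit a CloudFormation template,
    # rather than hand-editing JSON/YAML.
    t = Template()
    t.add_resource(Instance(
        "AppInstance",
        ImageId="ami-0123456789abcdef0",   # placeholder AMI baked by the build pipeline
        InstanceType="t3.micro",
        Tags=Tags(Service="platform-example"),
    ))

    print(t.to_json())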
Costa Mesa
Sr. Engineering Program Manager - Deployments, Platform Infrastructure

Job Summary
Come help us build the next-generation cloud platform to support internet services across Apple. Our engineering project management team partners closely with the platform network and server engineering teams to deploy the physical capacity that powers the platform, driving some of our most exciting Apple services, including iCloud, Maps, iTunes, and more. Our platform infrastructure ensures that Apple's services are reliable, scalable, fast, cost effective, and secure. We utilize both open source and in-house technologies to provide internal Apple developers with the best possible platform. In this role, you will have the unique opportunity to own delivery of the physical capacity for some of the world's largest-scale cloud services.

Description
We are looking for an expert project manager who thrives in a fast-paced environment and is adept at turning ambiguity into action. We're looking for people who enjoy diving deep into the technology, with a proven track record of shipping complex, cross-functional projects under demanding timelines. This is a hands-on position where you will be expected to drive all aspects of deployment, including requirements definition, vendor qualification, financial analysis, capacity planning, technical design discussions and documentation, testing, working with partners in data centers, deploying large-scale systems, and operating and monitoring services with the expectation of high reliability and high availability. This is not a task-based job; you will be responsible for successful outcomes and delivery. Strong fundamentals are a must, but you have more. You take responsibility; you feel a personal stake in the product you ship; you communicate responsibilities and scope clearly; you have exceptional attention to detail; you value integrity; you manage risk; you need to know how things work; you work for the success of the entire team; you thrive in uncertainty and strive to bring order to it; you have deep wisdom and judgement; you keep your eye on the ball; you build strong relationships; you are aware of politics but do not get mired in them; you are constantly looking to improve yourself and the team.

Education
A BS/MS in CS or a similar technical field is preferred.
Santa Clara
Data Scientist

Job Summary
Are you interested in applying your quantitative skills in a fast-paced tech environment? If so, then this is the job for you. The Worldwide Sales and Operations Advanced Analytics team is looking for a top research scientist in optimization and stochastic systems to design and create advanced analytics models for strategic and tactical decision-making, as well as drive the development of our future supply-chain infrastructure and planning processes.

Description
The individual will be part of a small team of experts responsible for the optimization of our current supply chain and the design of future networks. You will be instrumental in working closely with business stakeholders, research colleagues, finance, and IT groups to incorporate the essential tradeoffs within the models, and will take an active role in effectively communicating the recommendations to senior management.

Education
PhD in Computer Science, Statistics, Applied Math, Operations Research, or a related field

Additional Requirements
• Knowledge of stochastic systems analysis
• Familiarity with machine learning concepts
• 4+ years of hands-on experience in optimization modeling, simulation, and analysis
• Strong critical thinking and problem-solving ability
• Programming skills with at least one object-oriented language (e.g., Java) and one scripting language (e.g., Python)
• Experience using statistical analysis software packages (e.g., R)
• Strong communication and presentation skills
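
As a hedged, toy-scale illustration of the optimization modeling this role involves (the costs, capacities, and demands are invented; a production supply-chain model would be far larger and typically use a commercial solver), a transportation-style linear program in Python with SciPy:

    from scipy.optimize import linprog

    # Two plants ship to three DCs; decision variables are
    # x = [p0->d0, p0->d1, p0->d2, p1->d0, p1->d1, p1->d2].
    cost = [4, 6, 9, 5, 3, 7]            # hypothetical unit shipping costs
    A_supply = [[1, 1, 1, 0, 0, 0],      # plant 0 capacity
                [0, 0, 0, 1, 1, 1]]      # plant 1 capacity
    b_supply = [80, 70]
    A_demand = [[1, 0, 0, 1, 0, 0],      # DC 0 demand
                [0, 1, 0, 0, 1, 0],      # DC 1 demand
                [0, 0, 1, 0, 0, 1]]      # DC 2 demand
    b_demand = [40, 50, 30]

    res = linprog(cost, A_ub=A_supply, b_ub=b_supply,
                  A_eq=A_demand, b_eq=b_demand, bounds=(0, None))
    print("optimal cost:", res.fun)
    print("shipments:", res.x.round(1))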
Santa Clara
Data Architect

Job Summary
Apple Maps is the result of hundreds of pipelines and thousands of individual data transformations. We are seeking a high-energy, experienced Data Architect with experience in RDBMS, NoSQL, and big data infrastructure. The candidate is expected to have hands-on experience across the entire database infrastructure stack, including storage and capacity planning, data analysis, data migration, data warehousing, database design, DB performance tuning, database monitoring tools, and optimization, along with other key aspects of distributed repository technology.

Description
• Own the high-level vision of data systems within Apple
• Partner with engineering teams to identify their data storage needs
• Design and develop architecture for a data services ecosystem spanning relational, columnar, NoSQL, in-memory, and big data technologies
• Develop strategies for data acquisition, archive recovery, and implementation of a database
• Design data models for mission-critical and high-volume data management, with real-time and distributed data processing aligned to business requirements
• Implement database optimization techniques for database bottlenecks
• Implement centralized database monitoring tools, backup strategies, and recovery tools
• Automate database upgrades and implement no-downtime migrations
• Recommend a DB technology based on the defined use case and its logistics
• Implement database virtualization techniques
• Promote and develop data architecture best practices, guidelines, procedures, and repeatable, scalable frameworks
• Conduct data architecture assessments during feasibility, dimensioning, and technical reviews, and provide in-depth architectural support during the various phases of the program/project as required
• Responsible for data profiling, data analysis, data specification, data flow mappings, and business logic documentation associated with new or modified product data capture requirements
• Provide guidance and mentor the technical and business teams by providing solutions, recommendations, and documentation of use cases for continuous improvement
• Suggest optimal data schema and storage options based on the context of the problem being solved
• Quickly identify and resolve data integration issues (data validation and data cleansing) using various data architecture concepts and techniques
• Deliver solutions in a complex production environment with concurrent considerations such as: 1. phased migration, consolidation, and parallel production states of a disparate legacy data environment; 2. design of a multi-platform, multi-tenant systems environment with high-volume structured and unstructured data ingestion
• Identify, scope, and drive resolution to data issues such as sharding, retention, purging, etc.

Education
BS in Computer Science or relevant industry experience is required

Additional Requirements
• Capable of delivering on multiple competing priorities with little supervision
• Willing to strive for the ideal design process while still working within the framework of the organization, project, and environment
• Strong knowledge of and experience with Agile/Scrum methodology and iterative practices in a service delivery lifecycle is a plus
• Strong estimating and planning skills and proven ability as a problem-solver
• Ability and desire to thrive in a proactive, high-pressure, client-services environment
• Experience with machine learning, statistical techniques, and information retrieval
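
One of the data issues named above, sharding, can be illustrated with a minimal, hedged sketch (the key names and shard count are arbitrary; real systems layer replication, rebalancing, and often consistent hashing on top):

    import hashlib

    NUM_SHARDS = 8  # arbitrary for illustration

    def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
        # Stable hash so the same key always routes to the same shard,
        # independent of process or machine (unlike Python's built-in hash()).
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return int(digest, 16) % num_shards

    for key in ["place:12345", "tile:10/163/395", "route:abc"]:
        print(key, "-> shard", shard_for(key))

Note that plain modulo sharding reshuffles most keys when the shard count changes, which is why consistent hashing is often preferred when shards are added or removed.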
Santa Clara
Inside Sales Rep

Salary (not posted)

INSIDE SALES REPRESENTATIVE
Data Age Business Systems, Inc., the pre-eminent global industry leader in financial transaction software, is currently seeking top talent to join our Sales Team. Do you have superior communication and presentation skills? Do you exhibit a self-motivated and proactive work ethic? Do you possess a customer-centric focus and a passion for technology, and are you dedicated to winning the deal? Are you a bright and motivated individual with inside sales experience? If you answered yes to the above questions, then this position is an excellent opportunity for you. Data Age Business Systems offers a competitive base salary and uncapped commissions along with a complete benefit package. The ideal candidate will develop relationships with clients and see deals through completion in order to meet aggressive sales goals. This position is based at our corporate office in Clearwater, FL.

Duties and Responsibilities include:
• Learn, understand, and sell Company products and services that are consistent with the Company objectives.
• Meet assigned monthly revenue goals.
• Build a healthy pipeline of prospective clients.
• Be able to advise clients to generate future selling opportunities.
• Maintain records of all account activity in the Company database.
• Prepare reports and records covering activities promptly and properly, as the Company deems necessary.
• Attend trade shows as necessary.

Knowledge, Skills and Abilities Required to Successfully Perform Position:
• High school diploma along with a minimum of two years of related experience and/or training.
• Prior inside and/or retail sales experience a plus.
• Exceptional customer service skills.
• Excellent verbal and written communication, problem solving, interpersonal, and time management skills.
• Maintain professional appearance and attitude in all settings.

Benefit package includes Paid Time Off, Paid Holidays, Medical Insurance, Life Insurance, 401(k), and optional Dental and Vision Insurance. Candidates will be required to complete the company's background check and drug screening processes. Learn more about Data Age Business Systems, Inc. and our products: www.dataage.com
Clearwater
Data Scientist

What You'll Do
• You will be the subject matter expert for real-time data mining and data analytics for applications and infrastructure in our Cloud Customer Care services.
• Work with business unit stakeholders to understand high-cost areas and use cases that could be improved via prediction and optimization.
• Conduct detailed technology research on industry and vendor solutions for analyzing data, and implement new solutions to identify those which are the most promising.
• Provide thought leadership and collaborate with cross-functional engineering teams to streamline and/or improve adoption of analytics into their projects.
• Work within an agile development environment with other architects and product owners to scope, develop, and deliver world-class software solutions.
• Provide mentoring and coaching to the team in order to grow talent to the next level.
• Motivated self-starter who is highly results-driven, takes enormous pride in their work, and demonstrates a high degree of enthusiasm for engineering excellence.
• Must have the ability to think at a high level about systems and articulate key trade-offs in design.

Who You'll Work With
Cisco's CCBU (Customer Care Business Unit) is an industry leader in customer care solutions and is growing its Cloud SaaS engineering teams. The teams work in a high-performing agile environment with the latest in cloud development technologies and practices, including continuous delivery, continuous integration, test-driven development, and PaaS-based development. Articulate the next generation of Customer Care SaaS solutions with us that will fundamentally change the way companies interact with their customers and will transform this multi-billion-dollar industry. Come envision, influence, and implement the future of customer care with us.

Who You Are
Minimum Skills:
• BS/MS in CS/EE
• 12+ years of experience required in solving analytics problems using quantitative approaches or a related area
• Ability to develop experimental and analytic plans for data modeling processes, use of strong baselines, ability to accurately determine cause-and-effect relations
• Familiarity and practical experience in the areas of statistical learning and exploratory data analysis
• Experience evaluating and recommending appropriate open source or commercial analytics technology, toolsets, and approaches for commercial application
• Hands-on design with data analytics solutions, including big data and high-performance real-time attributes
• Hands-on experience with data exploration and visualization tools (e.g., Tableau, etc.)
• Hands-on experience executing data mining & analytics functionality (e.g., R, MapReduce, etc.)
• Cluster and analyze large amounts of customer-generated content and process data in large-scale environments such as Amazon EC2, Storm, Hadoop, or Spark
• Strong data extraction and processing, using MapReduce, Pig, and/or Hive
• Strong working knowledge of data privacy and security, including transport, storage, and encryption
• Exposure/experience with tools & infrastructure such as Jenkins, GitHub, Eclipse, …

Preferred Skills:
• Working experience with cloud design and development
• Expertise in DevOps, continuous integration, and continuous delivery
• Languages & technologies: Java, JavaScript, …
• Understanding of operations concepts such as alerting, monitoring, logging, and health checks
• Working knowledge of Unix/Linux systems, preferably including Docker
• Experience using and/or designing RESTful APIs, Spring, Hibernate, …
• Experience working on voice and collaboration applications; experience with contact center applications
• Internationalization/localization design considerations

Why Cisco
We connect everything: people, processes, data, and things. We innovate everywhere, taking bold risks to shape the technologies that give us smart cities, connected cars, and handheld hospitals. And we do it in style with unique personalities who aren't afraid to change the way the world works, lives, plays, and learns. We are thought leaders, tech geeks, pop culture aficionados, and we even have a few purple-haired rock stars. We celebrate the creativity and diversity that fuels our innovation. We are dreamers and we are doers. We Are Cisco.
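
As a hedged sketch of the "cluster and analyze large amounts of customer generated content" item above (the sample tickets and cluster count are made up, and production work would run at Spark/Hadoop scale as the posting notes), a minimal scikit-learn example:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Tiny, made-up customer care snippets standing in for real ticket text.
    tickets = [
        "cannot log in to the agent portal",
        "password reset link never arrives",
        "billing charge looks duplicated this month",
        "invoice total does not match the contract",
        "agent portal logs me out every few minutes",
        "refund for the duplicate charge still pending",
    ]

    X = TfidfVectorizer(stop_words="english").fit_transform(tickets)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    for text, label in zip(tickets, labels):
        print(label, text)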
Boxborough, MA, US