Ducks in a Row Consulting Services Inc. has proven data competencies in:

  • UNDERSTANDING SCHEMAS, DATA MODELS, MASTER SYSTEMS AND THEIR DATA FLOWS

Ducks in a Row data consultants have been trained in IDEF0, entity-relationship (ER) diagramming, data modeling, and SQL scripting, and can quickly analyze your data flows and recommend modifications for new solutions as needed.

  • CREATING AUTOMATED PROGRAMS THAT LOAD DISPARATE DATA INTO TARGETED SCHEMAS, ENABLING DAILY DATA LOADS

Data consultants can write scripts for data cleansing, ETL, data loads, and translating data from one schema to another as needed. This includes writing programs in PL/SQL, VB, VBA (Excel, Word), or .Net that scan external data and translate it into relational database records in the targeted schemas, as well as creating daily uploads into temporary (staging) tables and SQL scripts that map and maintain the data in the new schema.
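The staging-table pattern described above can be sketched as follows. This is a minimal illustration, not a production program: SQLite stands in for the target database, and the table and column names are hypothetical.

```python
import sqlite3

# Minimal ETL sketch: load raw external rows into a staging table, then
# use a SQL statement to cleanse and map them into the target schema.
# SQLite stands in for the production database; all names are hypothetical.
rows = [("1001", "Acme Corp", " 2500.00"), ("1002", "Widgets Ltd", "310.50")]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stg_orders (cust_id TEXT, cust_name TEXT, amount TEXT)")
con.execute("""CREATE TABLE orders (
    cust_id   INTEGER PRIMARY KEY,
    cust_name TEXT,
    amount    REAL)""")

# Daily upload into the temporary (staging) table -- everything lands as text.
con.executemany("INSERT INTO stg_orders VALUES (?, ?, ?)", rows)

# Cleansing/translation step: trim whitespace, cast types, upsert into target.
con.execute("""
    INSERT OR REPLACE INTO orders (cust_id, cust_name, amount)
    SELECT CAST(cust_id AS INTEGER),
           TRIM(cust_name),
           CAST(TRIM(amount) AS REAL)
    FROM stg_orders
""")
con.commit()
print(con.execute("SELECT * FROM orders ORDER BY cust_id").fetchall())
```

Rerunning the mapping step is safe because the upsert replaces existing rows, which is what makes a daily reload into the same staging table workable.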

  • UPDATING/CREATING PHYSICAL AND LOGICAL DATA MODELS THAT REPRESENT DATA REQUIREMENTS

Data consultants may use data modeling tools, direct SQL commands, or SQL scripts to create or update entities, attributes, and relationships from proposed data models that best represent the business requirements and accommodate future growth.

  • DATA SCIENCE TECHNIQUES

Data consultants draw on a master's degree in engineering with a concentration in computational thermal and fluid sciences, where they developed skills manipulating large matrices of data. They are currently studying data science and statistical learning for business through seminars and textbooks (statistics, data mining, predictive modeling techniques, and R) and reviewing MATLAB programs from prior research.

  • CREATING DASHBOARDS AND REPORTING FOR BUSINESS USERS

Data consultants can create automated dashboards in Excel (pivot tables, statistical analysis, advanced functions, VLOOKUPs), Crystal Reports, or custom web page reports to evaluate trends, display gap analysis results, or perform custom analyses.
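The kind of pivot-table rollup that drives such a dashboard can be sketched in a few lines. The sample sales figures here are invented purely for illustration.

```python
from collections import defaultdict

# Pivot-table-style rollup: group sales by region and quarter, the same
# summary a dashboard trend view would display. Sample data is invented.
sales = [
    ("East", "Q1", 120.0), ("East", "Q2", 150.0),
    ("West", "Q1", 90.0),  ("West", "Q2", 130.0),
]

pivot = defaultdict(dict)
for region, quarter, amount in sales:
    pivot[region][quarter] = pivot[region].get(quarter, 0.0) + amount

# Print one dashboard row per region with a grand total.
for region in sorted(pivot):
    row = pivot[region]
    print(region, row, "total:", sum(row.values()))
```

In Excel the same rollup is a pivot table with region on rows and quarter on columns; automating it in code is what allows the dashboard to refresh with each daily data load.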

  • LOAD/STRESS TESTING

Data consultants may perform stress tests in a test environment to ensure no data bottlenecks exist in the implemented solution. If results are unsatisfactory, they will revise the data extraction technique or streamline the data model until bottlenecks are removed.
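A simple form of this kind of stress test is timing repeated queries against a populated test table and flagging runs that exceed an agreed limit. The sketch below uses SQLite as a stand-in test environment, and the row count, iteration count, and threshold are arbitrary example values.

```python
import sqlite3
import time

# Rough load-test sketch: populate a test table, time repeated report
# queries, and flag a bottleneck if the run exceeds a threshold.
# SQLite stands in for the real database; all numbers are example values.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measurements (id INTEGER PRIMARY KEY, val REAL)")
con.executemany("INSERT INTO measurements (val) VALUES (?)",
                [(i * 0.5,) for i in range(10_000)])
con.commit()

THRESHOLD_SECONDS = 2.0  # arbitrary acceptance limit for this sketch
start = time.perf_counter()
for _ in range(100):  # simulate 100 repeated report queries
    con.execute("SELECT AVG(val), MAX(val) FROM measurements").fetchone()
elapsed = time.perf_counter() - start

print(f"100 queries in {elapsed:.3f}s")
if elapsed > THRESHOLD_SECONDS:
    print("Bottleneck: revise extraction or streamline the model")
else:
    print("Within acceptable limits")
```

In practice the threshold would come from the business's service-level targets, and an unsatisfactory run would trigger the revision loop described above (changing the extraction technique or simplifying the data model) before retesting.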