NIE-PDB: Advanced Database Systems
Basic Information
- Lecturer and tutor: Martin Svoboda
- Course webpage
- Schedule:
- Lectures: Tuesday 16:15 - 17:45 (T9:302)
- Labs: Tuesday 18:00 - 19:30 (T9:350) (formally held on odd weeks only, but exceptions exist)
- Table with points from practical classes, homework assignments and exam tests
Exam Dates
- Tuesday 13. 12. 2022: 16:15 - 18:30 (T9:302)
- Tuesday 3. 1. 2023: 16:45 - 19:00 (TH:A-942)
- Tuesday 10. 1. 2023: 11:00 - 13:15 (T9:105)
- Tuesday 24. 1. 2023: 11:00 - 13:15 (T9:105)
- Tuesday 31. 1. 2023: 11:00 - 13:15 (T9:105)
- There will be no additional exam dates
Homework Deadlines
- 00 - Topic selection: Tuesday 4. 10. 2022 until 23:59
- 01 - XQuery: Monday 10. 10. 2022 until 23:59
- 02 - SPARQL: Monday 17. 10. 2022 until 23:59
- 03 - MapReduce: Monday 31. 10. 2022 until 23:59
- 04 - RiakKV: Monday 7. 11. 2022 until 23:59
- 05 - Cassandra: Monday 14. 11. 2022 until 23:59
- 06 - MongoDB: Monday 28. 11. 2022 until 23:59
- 07 - Neo4j: Monday 5. 12. 2022 until 23:59
Lectures & Labs
- 20. 09. 2022
- 16:15 - Lecture - 01 - Introduction: Big Data, NoSQL Databases - PDF
- 27. 09. 2022
- 16:15 - Lecture - 02 - Data Formats: XML, JSON, BSON, RDF - PDF
- 18:00 - Lab - 00 - Organization - PDF
- 04. 10. 2022
- 16:15 - Lecture - 03 - XML Databases: XPath, XQuery - PDF
- 18:00 - Lab - 01 - XQuery - PDF
- Homework assignment - 01 - XQuery: deadline Monday 10. 10. 2022 until 23:59
- 11. 10. 2022
- 16:15 - Lecture - 04 - RDF Stores: SPARQL - PDF
- 18:00 - Lab - 02 - SPARQL - PDF
- Homework assignment - 02 - SPARQL: deadline Monday 17. 10. 2022 until 23:59
- 18. 10. 2022
- 16:15 - Lecture - 05 - Basic Principles: Scaling, Sharding, Replication, CAP Theorem, Consistency - PDF
- 25. 10. 2022
- 16:15 - Lecture - 06 - Apache Hadoop: MapReduce, HDFS - PDF
- 18:00 - Lab - 03 - MapReduce - PDF
- Homework assignment - 03 - MapReduce: deadline Monday 31. 10. 2022 until 23:59
- 01. 11. 2022
- 16:15 - Lecture - 07 - Key-Value Stores: RiakKV - PDF
- Homework assignment - 04 - RiakKV: deadline Monday 7. 11. 2022 until 23:59
- 08. 11. 2022
- 16:15 - Lecture - 08 - Wide Column Stores: Cassandra, CQL - PDF
- Homework assignment - 05 - Cassandra: deadline Monday 14. 11. 2022 until 23:59
- 15. 11. 2022 - Not scheduled (classes follow the timetable of an odd-week Friday)
- 22. 11. 2022
- 16:15 - Lecture - 09 - Document Databases: MongoDB - PDF
- 18:00 - Lab - 04 - MongoDB - PDF
- Homework assignment - 06 - MongoDB: deadline Monday 28. 11. 2022 until 23:59
- 29. 11. 2022
- 16:15 - Lecture - 10 - Graph Databases: Neo4j, Cypher - PDF
- 18:00 - Lab - 05 - Neo4j - PDF
- Homework assignment - 07 - Neo4j: deadline Monday 5. 12. 2022 until 23:59
- 06. 12. 2022
- 16:15 - Lecture - 11 - SQL Query Evaluation I: External Sort, Nested Loops Join - PDF
- 18:00 - Lecture - 12 - SQL Query Evaluation II: Sort-Merge Join, Query Evaluation and Optimization
- 13. 12. 2022 - Not scheduled
Formal Requirements
- Attendance during lectures and practical classes is recommended but not compulsory
- Altogether 7 individual homework assignments will be given during the semester
- Everyone must choose their own distinct topic, no later than during the XQuery practical class
- This topic must be reported to and explicitly accepted by the lecturer in advance
- Possible topics could be: library, cinema, cookbook, university, flights, etc.
- See the list below for additional suitable topics; you are also free to propose your own
- Your homework solutions must be within the topic, original, realistic, and non-trivial
- Solutions can only be submitted via a script executed on the corresponding server
- At most 130 points in total can be gained for all the homework assignments
- Each solution is awarded up to 20 or 15 points, depending on the assignment
- In case of any shortcomings, fewer points will be awarded accordingly
- Solutions may be submitted repeatedly; only the latest version is assessed
- Once a given assignment has been assessed by the lecturer, it can no longer be resubmitted
- A delay of one whole day is penalized by 5 points; shorter delays are penalized proportionally
- Longer delays incur the same penalty; it does not increase any further
- All the homework assignments must be submitted before the intended exam date in order to be considered
- None of the homework assignments is compulsory, yet you are encouraged to submit all of them
- During the practical classes, extra activity points can be acquired, too
- At least 100 points are required for the course credit to be granted
- Half of the points above this threshold are transferred to the exam as bonus points
- Only students with a course credit already acquired can sign up for the final exam
- The final exam consists of a compulsory written test and an optional oral examination
- At most 100 points can be acquired from the actual final written test
- This test consists of a theoretical part (open and multiple choice questions) and a practical part (exercises)
- Scoring less than 30% of the points in either of the two parts means failing the exam
- The final score corresponds to the sum of the written test and bonus points, if any
- Based on the result, everyone can voluntarily choose to undergo an oral examination
- The only condition is to have at least 50 points from the test and bonus points combined
- In such a case, the final score is further adjusted by anywhere from minus 10 to plus 5 points
- The oral examination can also be requested by the examiner in case of uncertainties in the test
- Final grade: 90 points and more for A, 80+ for B, 70+ for C, 60+ for D, and 50+ for E
Homework Assignments
- Preliminaries:
- NoSQL server: 10.38.6.127:42222 (only accessible at school or via faculty VPN)
- Login and password: sent by e-mail
- Tools:
- Submissions (see the example session below):
- Use sftp or WinSCP to upload your submission files to the NoSQL server
- Put these files into a directory ~/assignments/name/, where name is the name of the given homework
- I.e., xquery, sparql, mapreduce, riak, cassandra, mongodb, neo4j (case sensitive)
- Use ssh or PuTTY to open a remote shell connection to the NoSQL server
- Based on the instructions provided for a given homework assignment, verify that everything is working as expected
- Go to the ~/assignments/ directory and execute submit name, where name is once again the name of the homework
- Wait for the confirmation of success; otherwise your homework is not considered submitted
- Should any complications appear, send your solution by e-mail to martin.svoboda@fit.cvut.cz
- Just for your convenience, you can check the submitted files in the ~/submissions/ directory
- Once the homework is assessed, you will find comments in this directory, too
- Requirements:
- Respect the prescribed names of individual files to be submitted (case sensitive)
- Place all the files in the root directory of your submission
- Do not include shared libraries or files that are not requested
- I.e., do not submit files that were not explicitly requested
- Do not redirect or suppress the standard or error outputs of your shell scripts
- All your files must be syntactically correct and executable without errors
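- Example: a typical submission session might look as follows (a sketch only; pdb221_login and the xquery homework are placeholders, the file names depend on the assignment):
    # upload the solution files from your local machine
    sftp -P 42222 pdb221_login@10.38.6.127
    sftp> mkdir assignments/xquery
    sftp> put data.xml assignments/xquery/
    sftp> put xpath1.xp assignments/xquery/
    sftp> bye
    # then open a remote shell and run the submission script
    ssh -p 42222 pdb221_login@10.38.6.127
    cd ~/assignments/
    submit xquery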
1: XQuery
- Points: 20
- Assignment (see the example sketch at the end of this section):
- Create an XML document with sample data from the domain of your individual topic
- Work with mutually interlinked entities of at least 3 different types (e.g., lines, flights and tickets)
- Insert data about at least 15 particular entities (e.g., 3 lines, 4 flights, 8 tickets)
- Create expressions for exactly 2 different XPath queries (i.e., not more, not less)
- Use each of the following constructs at least once
- Axes: descendant or descendant-or-self or // abbreviation
- Predicates (all of the following): path expression (existence test), position testing, value comparison, general comparison
- Create expressions for exactly 3 different XQuery queries (that cannot be expressed solely using XPath)
- Use each of the following constructs at least once
- Direct or computed constructor
- FLWOR expression (with at least one FOR clause)
- Conditional expression
- Existential or universal quantifier
- Requirements:
- Both XML document and queries must be well-formed (i.e., syntactically correct)
- Put each XPath / XQuery expression into a standalone file (e.g., xpath1.xp)
- Always add a comment describing the intended query meaning in natural language via (: comment :)
- Each query expression must evaluate to a non-empty sequence
- Submission:
- data.xml: XML document with your data to be queried
- xpath1.xp and xpath2.xp: files with XPath expressions
- xquery1.xq, xquery2.xq, and xquery3.xq: files with XQuery expressions
- Execution:
- Execute the following shell command to evaluate each individual XPath or XQuery query expression
- saxonb-xquery -s $DataFile $QueryFile
- $DataFile is the input XML document to be queried, i.e., data.xml
- $QueryFile is a file with query expression to be evaluated, e.g., xquery1.xq
- Tools:
- References:
- Server: 10.38.6.127:42222
- Do not forget to execute the homework submission script!
- Deadline: Monday 10. 10. 2022 until 23:59
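- Example: a minimal XQuery sketch in the spirit of the required constructs (FLWOR, quantifier, conditional, direct constructor), assuming a hypothetical flights topic with flight and ticket elements; a starting point only, not a complete solution:
    (: names of flights with at least one sold ticket, labeled by capacity :)
    for $f in //flight
    where some $t in //ticket satisfies string($t/@flight) = string($f/@id)
    return
      if (xs:integer($f/@capacity) >= 100)
      then <large-flight>{ string($f/name) }</large-flight>
      else <small-flight>{ string($f/name) }</small-flight>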
2: SPARQL
- Points: 20
- Assignment (see the example sketch at the end of this section):
- Create a TTL document with sample RDF triples within your individual topic
- Use the Turtle notation in particular
- Work with mutually interlinked resources of at least 3 different types (e.g., lines, flights and tickets)
- Insert data about at least 15 particular resources (e.g., 3 lines, 4 flights, 8 tickets)
- Use each of the following constructs at least once
- Object list or predicate-object list
- Blank nodes (either using _ prefix or brackets [])
- Create expressions for exactly 5 different SPARQL queries (SELECT query form in particular)
- Use each of the following constructs at least once
- Basic graph pattern
- Group graph pattern
- Optional graph pattern (OPTIONAL)
- Alternative graph pattern (UNION)
- Difference graph pattern (MINUS)
- FILTER constraint
- Aggregation (GROUP BY with or without HAVING clause)
- Sorting (ORDER BY clause)
- Requirements:
- Both TTL document and queries must be well-formed (i.e., syntactically correct)
- Put each SPARQL query expression into a standalone file (e.g., query1.sparql)
- Always add a comment describing the intended query meaning in natural language via # comment
- Each query expression must evaluate to a non-empty solution sequence
- Both the data file and the query files must contain declarations of all prefixes used, including rdf: and similar
- Use @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . in your data file
- Use PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> in your query file
- Do not use FROM clauses in your queries, the input data file will automatically be accessible as the default graph
- Submission:
- data.ttl: TTL document with your RDF data to be queried
- query1.sparql, ..., query5.sparql: files with SPARQL query expressions
- Execution:
- Execute the following shell command to evaluate each individual SPARQL query expression
- sparql --data $DataFile --query $QueryFile
- $DataFile is the input RDF document to be queried, i.e., data.ttl
- $QueryFile is a file with query expression to be evaluated, e.g., query1.sparql
- Tools:
- References:
- Server: 10.38.6.127:42222
- Do not forget to execute the homework submission script!
- Deadline: Monday 17. 10. 2022 until 23:59
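- Example: a minimal SPARQL sketch combining several of the required patterns, assuming a hypothetical flights vocabulary under the made-up ex: prefix; a starting point only, not a complete solution:
    # flights since 2020 with their numbers of sold tickets, busiest first
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX ex: <http://example.org/flights#>
    SELECT ?flight ?year (COUNT(?ticket) AS ?tickets)
    WHERE {
      ?flight rdf:type ex:Flight .
      ?flight ex:year ?year .
      OPTIONAL { ?ticket ex:forFlight ?flight . }
      FILTER (?year >= 2020)
    }
    GROUP BY ?flight ?year
    ORDER BY DESC(?tickets)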
3: MapReduce
- Points: 20
- Assignment (see the example sketch at the end of this section):
- Create an input text file with sample data from the domain of your individual topic
- Insert realistic and non-trivial data about at least 10 entities of one type
- Put each of these entities on a separate line, i.e., assume that each line of the input file yields one input record
- Organize the entity attributes in any way that you can easily parse
- E.g., Medvídek 2007 53 100 Trojan Macháček Vilhelmová corresponding to a pattern Movie Year Rating Length Actors...
- Implement a non-trivial MapReduce job
- Choose from aggregation, grouping, filtering or any other general MapReduce usage pattern
- Use WordCount.java source file as a basis for your own implementation
- Both the Map and Reduce functions should be non-trivial, each about 10 lines of code
- It is not necessary to implement the Combine function
- Comment the source file and also provide a description of the problem you are solving
- You may also create a shell script that allows for the execution of your entire MapReduce job
- I.e., compile source files, deploy input file, execute the actual job, retrieve its result, ...
- However, this script is not supposed to be submitted and serves just for your own convenience
- Even if you do so, it will not be used for the purpose of homework assessment in any way
- Requirements:
- You may split your MapReduce job implementation into multiple Java source files
- They all must be located in the submission root directory
- At least MapReduce.java source file with its public MapReduce class is required
- This class is expected to represent the main class of the entire MapReduce job
- Do not change the way command line arguments are processed
- I.e., the only two arguments represent the input and output HDFS locations respectively
- Do not use packages in order to organize your Java source files
- Assume that only hadoop-common-3.3.4.jar and hadoop-mapreduce-client-core-3.3.4.jar libraries will be linked with your project
- Do not submit your Netbeans (or any other) project directory, do not submit Hadoop (or any other) libraries
- Use Java Standard Edition version 7 or newer
- You are free to use your /user/pdb221_login/ HDFS home directory for debugging
- Homework assessment will take place in a different dedicated HDFS directory
- Submission:
- readme.txt: description of the input data structure and objective of the MapReduce job
- input.txt: text file with your sample input data (i.e., only one input file is permitted)
- MapReduce.java and possibly additional *.java: Java source files with your MapReduce implementation
- output.txt: expected output of your MapReduce job (i.e., submit the result of the execution you performed by yourself)
- Tools:
- References:
- Server: 10.38.6.127:42222
- Do not forget to execute the homework submission script!
- Deadline: Monday 31. 10. 2022 until 23:59
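- Example: a minimal sketch of one possible grouping/aggregation job (average movie rating per year), assuming input lines follow the sample pattern Movie Year Rating Length Actors... from above; an illustration only, not a complete solution:
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MapReduce {

      public static class YearMapper extends Mapper<Object, Text, Text, IntWritable> {
        @Override
        protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          // parse "Movie Year Rating Length Actors..." and emit (year, rating)
          String[] fields = value.toString().split("\\s+");
          if (fields.length >= 3) {
            context.write(new Text(fields[1]), new IntWritable(Integer.parseInt(fields[2])));
          }
        }
      }

      public static class AverageReducer extends Reducer<Text, IntWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          // average all ratings observed for the given year
          int sum = 0, count = 0;
          for (IntWritable rating : values) {
            sum += rating.get();
            count += 1;
          }
          context.write(key, new DoubleWritable((double) sum / count));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "average rating per year");
        job.setJarByClass(MapReduce.class);
        job.setMapperClass(YearMapper.class);
        job.setReducerClass(AverageReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        // the only two arguments are the input and output HDFS locations
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }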
4: Riak
- Points: 15
- Assignment (see the example sketch at the end of this section):
- Create a shell script that works with our Riak database via its HTTP interface using the cURL tool
- Insert about 3 key/value objects into at least 3 buckets, each serving objects of a different entity type
- Always include content headers
- One bucket must contain objects with XML values (text/xml)
- One bucket must contain objects with JSON values (application/json)
- As for the third bucket, choose any content type you like
- Design your XML and JSON values so that they can be indexed by Yokozuna
- I.e., do not use Czech or other national accented characters
- Use each of the following type suffixes at least once: _s or _ss (string), _i or _is (integer), and _b or _bs (boolean)
- Associate your search index with at least the two buckets holding XML and JSON values
- Express at least 2 index search queries
- Use each of the following constructs at least once: wildcards, ranges, logical operators
- Remove all your objects at the end of your script (i.e., empty all your buckets)
- Requirements:
- Our Riak cluster is accessible via nodes running at https://10.38.6.127:10021/ or 10022 or 10023
- You must be connected to our NoSQL server via PuTTY / SSH, though
- Otherwise these nodes will not be reachable since the listed ports are blocked from outside
- Use bucket type with name pdb221_login for all buckets you would like to create
- Of course, replace login with your actual login name
- E.g.: /types/pdb221_svobom25/buckets/actors/keys/trojan for a particular object in a bucket of actors
- This bucket type already exists, therefore you do not need to take care of its creation
- However, do not hard-code the bucket type name when accessing your buckets, so that it can easily be changed
- I.e., write $RIAK_TYPE instead of a fixed name when referencing your bucket type in URLs
- $RIAK_TYPE is a variable provided from outside of your script (see below)
- Its value will correspond to your actual login name pdb221_login, and hence to your bucket type name
- E.g.: /types/$RIAK_TYPE/buckets/actors/keys/trojan
- Similarly, it is necessary to use the following variables to provide your Riak user name and password
- $RIAK_USER for your login name and $RIAK_PASSWORD for your password
- Your requests will therefore correspond to the following pattern
- curl -i -X GET -u $RIAK_USER:$RIAK_PASSWORD https://10.38.6.127:10021/types/$RIAK_TYPE/buckets/actors/keys/trojan
- When working with your XML and JSON values, wrap them in single quotes rather than double quotes
- The reason is that double quotes are needed by XML attributes and JSON strings
- E.g.: '{ name_s : "Ivan Trojan", year_i : 1964 }'
- Use search index with name corresponding to the pattern pdb221_login_index (i.e., your login name suffixed with _index)
- Do not create this index, it already exists
- Note that you have to associate it with your buckets before you start inserting any objects
- Otherwise your already existing objects will not be indexed
- Note that variables enclosed in single quotes will not be replaced by their values
- You therefore need to place them outside of these single quotes
- E.g.: '{ "props" : { "search_index" : "'$RIAK_USER'_index" } }'
- As expected, do not access your search index directly in your URLs
- Use ${RIAK_USER}_index or $RIAK_USER""_index instead
- This is necessary so that the fixed suffix _index is not treated as part of the variable name
- When preparing your search conditions, you must work carefully
- The reason is that certain characters are treated specifically both by shell as well as in URLs
- First, prepare your actual search condition at the logical level (e.g., (year_i:[1960 TO *]))
- Second, encode unsafe characters: space %20, [ %5B, and ] %5D (e.g., (year_i:%5B1960%20TO%20*%5D))
- Finally, escape round parentheses (e.g., \(year_i:%5B1960%20TO%20*%5D\))
- Note that you also need to escape ampersands in query parameters (e.g., ...\&q=\(year_i:%5B1960%20TO%20*%5D\))
- Your search requests will therefore correspond to the following pattern
- curl -i -X GET -u $RIAK_USER:$RIAK_PASSWORD https://10.38.6.127:10021/search/query/${RIAK_USER}_index?wt=json\&omitHeader=true\&q=\(year_i:%5B1960%20TO%20*%5D\)
- Always comment the intended meaning of search queries in natural language via # comment
- Each search query must evaluate to a non-empty set of matching results
- Make sure your shell script is actually executable (i.e., has the X permission set)
- If you are using WinSCP, locate your file, open the properties context menu, and add the X permission for the owner
- If you are using PuTTY, execute the following command: chmod u+x script.sh
- Also make sure your script can be executed repeatedly without failures
- Only use Linux style of line endings
- I.e., use LF = chr(10) = "\n" instead of CRLF = chr(13).chr(10) = "\r\n" on Windows or CR = chr(13) = "\r" on Mac
- If something is not working as expected, try to execute Riak ping query
- curl -i -X GET -u $RIAK_USER:$RIAK_PASSWORD https://10.38.6.127:10021/ping
- Submission:
- script.sh: Bash script allowing to execute all the HTTP requests
- Execution:
- First of all, define appropriate values for all three required variables
- export RIAK_USER="pdb221_login" (your actual login name)
- export RIAK_PASSWORD="MyPassword" (your actual password)
- export RIAK_TYPE=$RIAK_USER (your bucket type)
- Then, execute the following shell command to evaluate the whole Riak script as such
- ./script.sh
- Tools:
- RiakKV 3.0.10 (installed on the NoSQL server)
- References:
- Server: 10.38.6.127:42222
- Do not forget to execute the homework submission script!
- Deadline: Monday 7. 11. 2022 until 23:59
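- Example: a fragment of such a script following the patterns above (the actors bucket, the trojan key, and the field names are made-up placeholders; index association and the remaining buckets are omitted):
    #!/bin/bash
    # insert one JSON object (content type header, Yokozuna-friendly field suffixes)
    curl -i -X PUT -u $RIAK_USER:$RIAK_PASSWORD \
      -H "Content-Type: application/json" \
      -d '{ "name_s" : "Ivan Trojan", "year_i" : 1964 }' \
      https://10.38.6.127:10021/types/$RIAK_TYPE/buckets/actors/keys/trojan
    # search: actors born in 1960 or later (unsafe characters encoded and escaped)
    curl -i -X GET -u $RIAK_USER:$RIAK_PASSWORD \
      https://10.38.6.127:10021/search/query/${RIAK_USER}_index?wt=json\&q=\(year_i:%5B1960%20TO%20*%5D\)
    # clean up at the end of the script
    curl -i -X DELETE -u $RIAK_USER:$RIAK_PASSWORD \
      https://10.38.6.127:10021/types/$RIAK_TYPE/buckets/actors/keys/trojan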
5: Cassandra
- Points: 15
- Assignment (see the example sketch at the end of this section):
- Create a script (ordinary text file) with a sequence of CQL statements working with Cassandra database
- Define a schema for 2 tables for entities of different types
- Define at least one column for each of the following data types: tuple, list, set and map
- Insert about 5 rows into each of your tables
- Express at least 3 update statements
- You must perform the replace, add, and remove primitive operations (all of them) on columns of all collection types (all of them)
- I.e., you must involve altogether at least 9 different primitive operations on such columns
- Express 3 select statements
- Use WHERE and ORDER BY clauses at least once (both of them)
- Use ALLOW FILTERING in a query that cannot be evaluated without this instruction
- Create at least 1 secondary index
- Requirements:
- Only use your own keyspace when working on the assignment
- This keyspace already exists and its name is identical to your login name (pdb221_login)
- Therefore, do not create this keyspace in your script
- Do not switch to your keyspace when you are inside your script
- I.e., do not execute the USE command to change the active keyspace from within the script
- Specify the intended keyspace outside of your script using command line options (see below)
- Also, do not use fully qualified names inside your script (e.g., for tables etc.)
- The reason is that a different dedicated keyspace will be used when assessing your homework
- You can assume that this keyspace will be completely empty at the beginning
- Comments:
- The following error messages can be ignored:
- Error from server: code=1300 [Replica(s) failed to execute read]...
- Submission:
- script.cql: text file with CQL statements
- Execution:
- Execute the following shell command to evaluate the whole CQL script
- cqlsh -u $UserName -p $UserPassword -k $KeyspaceName -f $ScriptFile
- $UserName is your Cassandra user name (i.e., pdb221_login)
- $UserPassword is your Cassandra password (if not specified, you will be prompted)
- $KeyspaceName is a name of keyspace that should be used (e.g., pdb221_login)
- $ScriptFile is a file with CQL queries to be executed (i.e., script.cql)
- Tools:
- References:
- Server: 10.38.6.127:42222
- Do not forget to execute the homework submission script!
- Deadline: Monday 14. 11. 2022 until 23:59
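- Example: a minimal CQL sketch of one table with the required column types and a few statements (the movies table and all names and values are illustrative only; the second table, deletes, and the secondary index are omitted):
    -- one table whose columns cover the tuple, list, set, and map types
    CREATE TABLE movies (
        title text,
        year int,
        rating int,
        director tuple<text, int>,
        actors list<text>,
        genres set<text>,
        ratings map<text, int>,
        PRIMARY KEY ((title), year)
    );
    INSERT INTO movies (title, year, rating, director, actors, genres, ratings)
    VALUES ('Medvidek', 2007, 53, ('Jan Hrebejk', 1967), ['Trojan', 'Machacek'], {'comedy'}, {'csfd' : 53});
    -- add to a list, remove from a set, replace one map entry
    UPDATE movies SET actors = actors + ['Vilhelmova'] WHERE title = 'Medvidek' AND year = 2007;
    UPDATE movies SET genres = genres - {'comedy'} WHERE title = 'Medvidek' AND year = 2007;
    UPDATE movies SET ratings['csfd'] = 75 WHERE title = 'Medvidek' AND year = 2007;
    -- filtering query that cannot be evaluated without ALLOW FILTERING (rating is not a key column)
    SELECT title, year FROM movies WHERE rating > 50 ALLOW FILTERING;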
6: MongoDB
- Points: 20
- Assignment (see the example sketch at the end of this section):
- Create a JavaScript script with a sequence of commands working with MongoDB database
- Explicitly create 2 collections for entities of different types
- I.e., create them using createCollection method
- Insert about 5 documents into each one of them
- These documents must be realistic, non-trivial, and with both embedded objects and arrays
- Interlink the documents using references
- Use both insertOne and insertMany operations, each at least once
- Express 2 replace operations
- One ordinary and one with activated upsert mode
- Express 3 update operations
- Two ordinary and one with activated upsert mode
- Use at least 3 different update operators
- Use both updateOne and updateMany operations, each at least once
- Express 5 find queries (with non-trivial selections)
- Use at least one logical operator ($and, $or, $not)
- Use $elemMatch operator on array fields at least once
- Use both positive and negative projection (each at least once)
- Use sort modifier
- Describe the real-world meaning of all your queries in comments
- Express 1 MapReduce query (non-trivial, i.e., not easily expressed using ordinary find operation)
- Describe its meaning, contents of intermediate key-value pairs and the final output
- Note that the reduce function must be associative, commutative, and idempotent
- Requirements:
- Call export LC_ALL=C in case you have difficulties in launching the mongo shell
- Only use your own database when working on the assignment
- This database already exists and its name is identical to your login name (pdb221_login)
- Do not switch to your database when you are inside your script
- I.e., do not execute the USE database nor the db.getSiblingDB('database') command
- Specify the intended database outside of your script using command line options (see below)
- Note that a different dedicated database will be used when assessing your homework
- You can assume that this database will be completely empty at the beginning
- Print the output of your MapReduce job using out: { inline: 1 } option
- I.e., do not redirect the output into a standalone collection
- Submission:
- script.js: JavaScript script with MongoDB database commands
- Execution:
- Execute the following shell command to evaluate the whole MongoDB script
- cat $ScriptFile | mongosh $DatabaseName -u $UserName -p $UserPassword --authenticationDatabase admin
- $UserName is your MongoDB user name (i.e., pdb221_login)
- $UserPassword is your MongoDB password (if not specified, you will be prompted)
- $DatabaseName is a name of database that should be used (e.g., pdb221_login)
- $ScriptFile is a file with MongoDB queries to be executed (i.e., script.js)
- Tools:
- References:
- Server: 10.38.6.127:42222
- Do not forget to execute the homework submission script!
- Deadline: Monday 28. 11. 2022 until 23:59
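- Example: a minimal mongosh sketch of one collection, an upsert update, and a find query (the movies collection and all field names are illustrative only; the second collection, replaces, and the MapReduce query are omitted):
    // explicitly created collection with embedded documents and arrays
    db.createCollection("movies");
    db.movies.insertMany([
      { _id: 1, title: "Medvidek", year: 2007,
        ratings: [ { source: "csfd", value: 53 } ],
        director: { name: "Jan Hrebejk" } },
      { _id: 2, title: "Samotari", year: 2000,
        ratings: [ { source: "csfd", value: 86 } ],
        director: { name: "David Ondricek" } }
    ]);
    // increment a counter, creating the document if it does not exist yet (upsert mode)
    db.movies.updateOne(
      { title: "Medvidek" },
      { $inc: { views: 1 }, $set: { checked: true } },
      { upsert: true }
    );
    // movies having a csfd rating above 80; positive projection, sorted by year
    db.movies.find(
      { ratings: { $elemMatch: { source: "csfd", value: { $gt: 80 } } } },
      { title: 1, year: 1 }
    ).sort({ year: -1 });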
7: Neo4j
- Points: 20
- Assignment (see the example sketch at the end of this section):
- Insert realistic nodes and relationships into your embedded Neo4j database
- Use a single CREATE statement for this purpose
- Insert altogether at least 10 nodes for entities of at least 2 different types (i.e., different labels)
- Insert altogether at least 15 relationships of at least 2 different types
- Include properties (both for nodes and relationships)
- Associate all your nodes with user-defined identifiers
- Express 5 Cypher query expressions
- Use each of the MATCH, OPTIONAL MATCH, RETURN, WITH, WHERE, and ORDER BY (sub)clauses at least once
- Use aggregation in at least one query
- Requirements:
- Since just a single shared database is available, the following convention needs to be followed
- Prefix names of all node labels and relationship types with your login name (pdb221_login)
- E.g., pdb221_login_ACTOR for a node label or pdb221_login_PLAY for a relationship type
- During the homework assessment, these prefixes will automatically be replaced
- You can therefore safely assume that no nodes or relationships with such prefixes will exist beforehand
- Describe the meaning of your Cypher expressions in natural language (via // comment)
- Submission:
- queries.cypher: text file with a sequence of Cypher statements (including CREATE)
- Execution:
- Execute the following shell command to evaluate the whole Neo4j script
- cypher-shell -u $UserName -p $UserPassword -f $ScriptFile --non-interactive --format verbose
- $UserName is your Neo4j user name (i.e., pdb221_login)
- $UserPassword is your Neo4j password (if not specified, you will be prompted)
- $ScriptFile is a file with Cypher queries to be executed (i.e., queries.cypher)
- Tools:
- References:
- Server: 10.38.6.127:42222
- Do not forget to execute the homework submission script!
- Deadline: Monday 5. 12. 2022 until 23:59
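- Example: a minimal Cypher sketch of a single CREATE statement and one aggregation query (the ACTOR and MOVIE labels, the PLAYS type, and all properties are illustrative; prefixes follow the convention above):
    // a tiny graph: two actors playing in one movie, with properties on both sides
    CREATE
      (trojan:pdb221_login_ACTOR { id: 1, name: 'Ivan Trojan' }),
      (machacek:pdb221_login_ACTOR { id: 2, name: 'Jiri Machacek' }),
      (medvidek:pdb221_login_MOVIE { id: 3, title: 'Medvidek', year: 2007 }),
      (trojan)-[:pdb221_login_PLAYS { role: 'Ivan' }]->(medvidek),
      (machacek)-[:pdb221_login_PLAYS { role: 'Jirka' }]->(medvidek);
    // number of actors per movie since 2000, most crowded first
    MATCH (a:pdb221_login_ACTOR)-[:pdb221_login_PLAYS]->(m:pdb221_login_MOVIE)
    WHERE m.year >= 2000
    WITH m, count(a) AS actors
    RETURN m.title, actors
    ORDER BY actors DESC;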
Individual Topics
- Try to propose your own original topic in the first place
- You can also get inspired by the following topics (in alphabetical order, in English and in Czech)
Access system,
Accommodation booking,
Accommodation comparator,
Accommodation sharing,
Agricultural production,
Air rescue service,
Air traffic management,
Airline,
Airport,
Armory,
Army,
Artworks,
Assignment submission,
ATM network,
Attendance system,
Auction,
Bakery,
Bank,
Bank account,
Bazaar,
Beekeeper,
Betting shop,
Beverages store,
Bike sharing,
Black market,
Blog,
Boat rental,
Bookstore,
Botanic garden,
Brewery,
Building materials store,
Bus station,
Bus tickets,
Business register,
Cadastre,
Cafe,
Canteens,
Car rental,
Car repair shop,
Car showroom,
Casino,
Castles,
Catering,
Caves,
Cemetery,
Cinema,
City tours,
Classbook,
Collection and disposal of waste,
Collection of laws,
College dorm,
Computer games,
Conference,
Construction management,
Content management system,
Contract register,
Convenience store,
Cookbook,
Cooking classes,
Council meetings,
Countries of the world,
Courier service,
Cowshed,
Cryptocurrency exchange,
Dance school,
Deliveries,
Desk games,
Discussion forum,
Doctor's office,
Dog park,
Dog shelter,
Driving school,
Drugs,
Dump,
Educational institution,
Elections,
Electronic prescriptions,
Employee records,
Empty houses,
Entertainment center,
Environmental center,
Exhibition,
Exhibition grounds,
Experience donation,
Fairy tales,
Farmer markets,
Finance manager,
Financial advisory,
Financial markets,
Fire protection,
Fishing equipment,
Fitness center,
Flat owners association,
Fleet,
Flight ticket booking,
Food bank,
Food distribution,
Football league,
Football team,
Forest kindergarten,
Forwarding company,
Foster care,
Gallery,
Garden center,
Gardening colony,
Gas station,
Glassworks,
Golf clubs,
Grant agency,
Grid,
Hair salon,
Handyman,
Hardware,
Health insurance,
High school,
Highway fees,
Hiking trails,
Hobby market,
Hockey league,
Holiday offers,
Horse racing,
Hospital,
Hotel,
Housing association,
Chamber of deputies,
Chess club,
Chess competition,
Chess database,
Incinerator,
Industrial zone,
Insurance company,
Intelligence service,
Intersport arena,
Job offers,
Jurassic park,
Kindergarten,
Knowledge competition,
Laboratory,
Labour office,
Language school,
Lego,
Leisure activities,
Library,
Log book,
Logistics center,
Logistics company,
Logistics warehouse,
Lottery,
Luggage storage,
Manufacturing processes,
Maternity hospital,
Medical reimbursement,
Meeting scheduling,
Menu,
Metro operation,
Military area,
Mobile operator,
Mobile phones,
Model trains,
Morgue,
Mountain rescue service,
Movies,
Multinational company,
Multiplex network,
Museum,
Music festival,
Music production,
Musical instruments,
National parks,
Nature reserve,
Newspaper publishing,
Non-bank loans,
Nuclear power plant,
Nutritional values,
Online exercises,
Online streaming service,
Orienteering,
Outdoor swimming pool,
Parking lot,
Parts catalog,
Patient medical card,
Pawnshop,
Payment cards,
Personal documents,
Personal trainer,
Pharmacy,
Photo album,
Pizzeria,
Plagiarism detection,
Planning calendar,
Police database,
Political parties,
Popular music,
Population register,
Post,
Postal addresses,
Poultry farming,
Prestashop,
Prison,
Procurement,
Project management,
Property administration,
Psychiatric hospital,
Public greenery,
Public transport,
Railway network,
Real estate agency,
Recruitment agency,
Refugee camp,
Registration of sales,
Regulatory fees,
Research projects,
Research publications,
Restaurant,
Restaurant reservations,
Road closures,
Room reservation,
Scout group,
Scrapyard,
Security agency,
Seizures,
Shared travel,
Shooting range,
Shopping center,
Ski school,
Skiing area,
Smart home systems,
Sobering-up cell,
Social benefits,
Social network,
Software development,
Spare parts,
Sports club,
Sports tournament,
Stable,
Statement of work,
Stock exchange,
Student book,
Study abroad,
Study materials,
Study system,
Subsidy programs,
Summer camp,
Supermarket,
Sweet-shop,
Swimming pool,
Symphony orchestra,
Tax office,
Taxi service,
Teahouse,
Theater,
Theater plays,
Time tables,
Tollgates,
Tourism,
Tourist group,
Traffic accidents,
Traffic control center,
Train station,
Transport company,
Transport control,
Travel agency,
Trial,
Truck transport,
TV program,
TV series,
Universe,
Vaccination abroad,
Veterinary clinic,
Video shop,
Virtual tours,
Visas,
War conflicts,
Water park,
Water supply,
Weapons,
Weather forecast,
Webhosting,
Webshop,
Wedding dress rental,
Wholesale,
Winter road cleaning,
World heritage list,
Zoning plan,
Zoo
- Nevertheless, the following topics are not allowed this semester
Exam Requirements
Advanced SQL
- SQL query evaluation and optimization
NoSQL Introduction
- Big Data and NoSQL terms, V characteristics (volume, variety, velocity, veracity, value, validity, volatility), current trends and challenges (Big Data, Big Users, processing paradigms, ...), principles of relational databases (functional dependencies, normal forms, transactions, ACID properties); types of NoSQL systems (key-value, wide column, document, graph, ...), their data models, features and use cases; common features of NoSQL systems (aggregates, schemalessness, scaling, flexibility, sharding, replication, automated maintenance, eventual consistency, ...)
Data Formats
- XML: constructs (element, attribute, text, ...), content model (empty, text, elements, mixed), entities, well-formedness; document and data oriented XML
- JSON: constructs (object, array, value), types of values (strings, numbers, ...); BSON: document structure (elements, type selectors, property names and values)
- RDF: data model (resources, referents, values), triples (subject, predicate, object), statements, blank nodes, IRI identifiers, literals (types, language tags); graph representation (vertices, edges); N-Triples notation (RDF file, statements, triple components, literals, IRI references); Turtle notation (TTL file, prefix definitions, triples, object and predicate-object lists, blank nodes, prefixed names, literals)
- CSV: constructs (document, header, record, field)
XML Databases
- Native XML databases vs. XML-enabled relational databases; data model (XDM): tree (nodes for document, elements, attributes, texts, ...), document order, reverse document order, sequences, atomic values, singleton sequences
- XPath language: path expressions (relative vs. absolute, evaluation algorithm), path step (axis, node test, predicates), axes (forward: child, descendant, following, ...; reverse: parent, ancestor, preceding, ...; attribute), node tests, predicates (path conditions, position testing, ...), abbreviations
- XQuery language: path expressions, direct constructors (elements, attributes, nested queries, well-formedness), computed constructors (dynamic names), FLWOR expressions (for, let, where, order by, and return clauses), typical FLWOR use cases (joining, grouping, aggregation, integration, ...), conditional expressions (if, then, else), switch expressions (case, default, return), universal and existential quantified expressions (some, every, satisfies), comparisons (value, general, node; errors), atomization of values (elements, attributes)
RDF Stores
- Linked Data: principles (identification, standard formats, interlinking, open license), Linked Open Data Cloud
- SPARQL: graph pattern matching (solution sequence, solution, variable binding, compatibility of solutions), graph patterns (basic, group, optional, alternative, graph, minus); prologue declarations (BASE, PREFIX clauses), SELECT queries (SELECT, FROM, and WHERE clauses), query dataset (default graph, named graphs), variable assignments (BIND), FILTER constraints (comparisons, logical connectives, accessors, tests, ...), solution modifiers (DISTINCT, REDUCED; aggregation: GROUP BY, HAVING; sorting: ORDER BY, LIMIT, OFFSET), query forms (SELECT, ASK, DESCRIBE, CONSTRUCT)
MapReduce
- Programming models, paradigms and languages; parallel programming models, process interaction (shared memory, message passing, implicit interaction), problem decomposition (task parallelism, data parallelism, implicit parallelism)
- MapReduce: programming model (data parallelism, map and reduce functions), cluster architecture (master, workers, message passing, data distribution), map and reduce functions (input arguments, emission and reduction of intermediate key-value pairs, final output), data flow phases (mapping, shuffling, reducing), input parsing (input file, split, record), execution steps (parsing, mapping, partitioning, combining, merging, reducing), combine function (commutativity, associativity), additional functions (input reader, partition, compare, output writer), implementation details (counters, fault tolerance, stragglers, task granularity), usage patterns (aggregation, grouping, querying, sorting, ...)
- Apache Hadoop: modules (Common, HDFS, YARN, MapReduce), related projects (Cassandra, HBase, ...); HDFS: data model (hierarchical namespace, directories, files, blocks, permissions), architecture (NameNode, DataNode, HeartBeat messages, failures), replica placement (rack-aware strategy), FsImage (namespace, mapping of blocks, system properties) and EditLog structures, FS commands (ls, mkdir, ...); MapReduce: architecture (JobTracker, TaskTracker), job implementation (Configuration; Mapper, Reducer, and Combiner classes; Context, write method; Writable and WritableComparable interfaces), job execution schema
NoSQL Principles
- Scaling: scalability definition; vertical scaling (scaling up/down), pros and cons (performance limits, higher costs, vendor lock-in, ...); horizontal scaling (scaling out/in), pros and cons, network fallacies (reliability, latency, bandwidth, security, ...), cluster architecture; design questions (scalability, availability, consistency, latency, durability, resilience)
- Distribution models: sharding: idea, motivation, objectives (balanced distribution, workload, ...), strategies (mapping structures, general rules), difficulties (evaluation of requests, changing cluster structure, obsolete or incomplete knowledge, network partitioning, ...); replication: idea, motivation, objectives, replication factor, architectures (master-slave and peer-to-peer), internal details (handling of read and write requests, consistency issues, failure recovery), replica placement strategies; mutual combinations of sharding and replication
- CAP theorem: CAP guarantees (consistency, availability, partition tolerance), CAP theorem, consequences (CA, CP and AP systems), consistency-availability spectrum, ACID properties (atomicity, consistency, isolation, durability), BASE properties (basically available, soft state, eventual consistency)
- Consistency: strong vs. eventual consistency; write consistency (write-write conflict, context, pessimistic and optimistic strategies), read consistency (read-write conflict, context, inconsistency window, session consistency), read and write quora (formulae, motivation, workload balancing)
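- For illustration, the standard formulation of the quorum formulae referred to above: with replication factor N, read quorum R, and write quorum W, a read is guaranteed to see the latest write when R + W > N, and write-write conflicts are prevented when W > N/2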
Key-Value Stores
- Data model (key-value pairs), key management (real-world identifiers, automatically generated, structured keys, prefixes), basic CRUD operations, use cases, representatives, extended functionality (MapReduce, TTL, links, structured store, ...)
- Riak: data model (buckets, objects, metadata headers); HTTP interface, cURL tool (options); CRUD operations (POST, PUT, GET, and DELETE methods, structure of URLs, data, headers), buckets operations (buckets, keys, properties); links (definition, headers, tags, link walking, navigational steps: bucket, tag and keep components), data types (Convergent Replicated Data Types: register, flag, counter, set, map; conflict resolution policies; usage restrictions), Search 2.0 Yokozuna (architecture; indexation and query evaluation processes; extractors: text, XML, JSON; SOLR document: extracted and technical fields; indexing schema: tokens, triples; full-text index creation, association and usage; query patterns: wildcards, ranges, ...); causal context (motivation, low-level techniques: timestamps, vector clocks, ...); vector clocks (logical clocks, vector of clocks, message passing); Riak Ring (physical vs. virtual nodes, consistent hashing, partitions, replica placement strategy, hinted handoff, handling of read and write requests)
Wide Column Stores
- Data model (column families, rows, columns), query patterns, use cases, representatives
- Cassandra: data model (keyspaces, tables, rows, columns), primary keys (partition key, clustering columns), column values (missing; empty; native data types, tuples, user-defined types; collections: lists, sets, maps; frozen mode), additional data (TTL, timestamp); CQL language: DDL statements: CREATE KEYSPACE (replication options), DROP KEYSPACE, USE keyspace, CREATE TABLE (column definitions, usage of types, primary key), DROP TABLE, TRUNCATE TABLE; native data types (int, varint, double, boolean, text, timestamp, ...); literals (atomic, collections, ...); DML statements: SELECT statements (SELECT, FROM, WHERE, GROUP BY, ORDER BY, and LIMIT clauses; DISTINCT modifier; selectors; non/filtering queries, ALLOW FILTERING mode; filtering relations; aggregates; restrictions on sorting and aggregation), INSERT statements (update parameters: TTL, TIMESTAMP), UPDATE statements (assignments; modification of collections: additions, removals), DELETE statements (deletion of rows, removal of columns, removal of items from collections)
Document Stores
- Data model (documents), query patterns, use cases, representatives
- MongoDB: data model (databases, collections, documents, field names), document identifiers (features, ObjectId), data modeling (embedded documents, references); CRUD operations (insert, update, save, remove, find); insert operation (management of identifiers); update operation: replace vs. update mode, multi option, upsert mode, update operators (field: $set, $rename, $inc, ...; array: $push, $pop, ...); save operation (insert vs. replace mode); remove operation (justOne option); find operation: query conditions (value equality vs. query operators), query operators (comparison: $eq, $ne, ...; element: $exists; evaluation: $regex, ...; logical: $and, $or, $not; array: $all, $elemMatch, ...), dot notation (embedded fields, array items), querying of arrays, projection (positive, negative), projection operators (array: $slice, $elemMatch), modifiers (sort, skip, limit); MapReduce (map function, reduce function, options: query, sort, limit, out); primary and secondary index structures (index types: value, hashed, ...; forms; properties: unique, partial, sparse, TTL)
Graph Databases
- Data model (property graphs), use cases, representatives
- Neo4j: data model (graph, nodes, relationships, directions, labels, types, properties), properties (fields, atomic values, arrays); embedded database mode; traversal framework: traversal description, order (breadth-first, depth-first, branch ordering policies), expanders (relationship types, directions), uniqueness (NODE_GLOBAL, RELATIONSHIP_GLOBAL, ...), evaluators (INCLUDE/EXCLUDE and CONTINUE/PRUNE results; predefined evaluators: all, excludeStartPosition, ...; custom evaluators: evaluate method), traverser (starting nodes, iteration modes: paths, end nodes, last relationships); Java interface (labels, types, nodes, relationships, properties, transactions); Cypher language: graph matching (solutions, variable bindings); query sub/clauses (read, write, general); path patterns, node patterns (variable, labels, properties), relationship patterns (variable, types, properties, variable length); MATCH clause (path patterns, WHERE conditions, uniqueness requirement, OPTIONAL mode); RETURN clause (DISTINCT modifier, ORDER BY, LIMIT, SKIP subclauses, aggregation); WITH clause (motivation, subclauses); write clauses: CREATE, DELETE (DETACH mode), SET (properties, labels), REMOVE (properties, labels); query structure (chaining of clauses, query parts, restrictions)
Recommended Literature
- Sadalage, Pramod J. - Fowler, Martin: NoSQL Distilled.
ISBN: 978-0-321-82662-6.
Pearson Education, Inc., 2013.
- Wiese, Lena: Advanced Data Management: For SQL, NoSQL, Cloud and Distributed Databases.
ISBN: 978-3-11-044140-6 (hardcover), 978-3-11-044141-3 (eBook PDF), 978-3-11-043307-4 (eBook EPUB).
DOI: 10.1515/9783110441413.
Walter de Gruyter GmbH, 2015.
- Zomaya, Albert Y. - Sakr, Sherif: Handbook of Big Data Technologies.
ISBN: 978-3-319-49339-8 (hardcover), 978-3-319-49340-4 (eBook).
DOI: 10.1007/978-3-319-49340-4.
Springer International Publishing AG, 2017.