tensorflow queue

def simple_shuffle_batch(source, capacity, batch_size=10):
  # Create a random shuffle queue.
  queue = tf.RandomShuffleQueue(capacity=capacity,
                                min_after_dequeue=int(0.9*capacity),
                                shapes=source.shape, dtypes=source.dtype)

  # Create an op to enqueue one item.
  enqueue = queue.enqueue(source)

  # Create a queue runner that, when started, will launch 4 threads applying
  # that enqueue op.
  num_threads = 4
  qr = tf.train.QueueRunner(queue, [enqueue] * num_threads)

  # Register the queue runner so it can be found and started by
  # tf.train.start_queue_runners later (the threads are not launched yet).
  tf.train.add_queue_runner(qr)

  # Create an op to dequeue a batch
  return queue.dequeue_many(batch_size)
  
  
# Create a dataset that counts from 0 to 99
input = tf.constant(list(range(100)))
input = tf.data.Dataset.from_tensor_slices(input)
input = input.make_one_shot_iterator().get_next()

# Create a slightly shuffled batch from the sorted elements
get_batch = simple_shuffle_batch(input, capacity=20)

# `MonitoredSession` will start and manage the `QueueRunner` threads.
with tf.train.MonitoredSession() as sess:
  # Since the `QueueRunners` have been started, data is available in the
  # queue, so the `sess.run(get_batch)` call will not hang.
  while not sess.should_stop():
    print(sess.run(get_batch))


Run like this, the pipeline keeps producing batches of the requested size from the dataset.

import tensorflow as tf

def simple_shuffle_batch(source, capacity, batch_size=5):
  queue = tf.RandomShuffleQueue(capacity=capacity,
                                min_after_dequeue=int(0.9*capacity),
                                shapes=source.shape, dtypes=source.dtype)
  enqueue = queue.enqueue(source)
  num_threads = 4
  qr = tf.train.QueueRunner(queue, [enqueue] * num_threads)
  tf.train.add_queue_runner(qr)
  return queue.dequeue_many(batch_size)

inp = tf.constant(list(range(100)))
inp = tf.data.Dataset.from_tensor_slices(inp)
inp = inp.make_one_shot_iterator().get_next()

print(inp.shape)
print(inp)

get_batch = simple_shuffle_batch(inp, capacity=20)

with tf.train.MonitoredSession() as sess:
  while not sess.should_stop():
    print(sess.run(get_batch))
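
For comparison, the same shuffle-and-batch behavior can be had from tf.data alone, without queues or QueueRunners. A minimal sketch (the buffer size of 20 and batch size of 5 are just chosen to mirror the example above):

import tensorflow as tf

# Same counting dataset, but shuffled and batched by tf.data itself.
dataset = tf.data.Dataset.from_tensor_slices(tf.constant(list(range(100))))
dataset = dataset.shuffle(buffer_size=20).batch(5)
get_batch = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
  try:
    while True:
      print(sess.run(get_batch))
  except tf.errors.OutOfRangeError:
    pass  # one pass over the 100 elements is done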

Thread version (explicit Python threads with a Coordinator) added below.



import tensorflow as tf
import numpy as np
import threading

def enqueue_thread(coord, sess, t_id):
  with coord.stop_on_exception():
    while not coord.should_stop():
      # Feed one 10x2x2 array from the iterator into the queue.
      print("enqueue op ", t_id, sess.run(enqueue_op, feed_dict={img_p: list(next(it))}))

def dequeue_thread(coord, sess, t_id):
  with coord.stop_on_exception():
    while not coord.should_stop():
      # Pull one item out of the queue (do something useful with it here).
      print("dequeue op ", t_id, sess.run(dequeue_op))

def data_iterator():
  while True:
    yield np.random.rand(10,2,2)
#    for img in imgs:
#      yield img


# data iterator that feeds the queue
it = data_iterator()
# make the queue and an enqueue operation fed through a placeholder
q = tf.FIFOQueue(capacity=100, dtypes=[tf.float64])
img_p = tf.placeholder(tf.float64, [10, 2, 2])
enqueue_op = q.enqueue(img_p)
# make dequeue operation
dequeue_op = q.dequeue()

with tf.Session() as sess:
  coord = tf.train.Coordinator()
 
  # make enqueue thread and start that
  enqueue_threads = [threading.Thread(target=enqueue_thread, args=(coord, sess, i)) for i in range(3)]
  for t in enqueue_threads:
    t.start()


  # make dequeue thread and start that
  dequeue_threads = [threading.Thread(target=dequeue_thread, args=(coord, sess, i)) for i in range(3)]
  for t in dequeue_threads:
    t.start()
 
  coord.join(dequeue_threads)
  coord.join(enqueue_threads)
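
As written, the demo above runs until the process is killed. One way to let the main thread bound the run (a sketch; the cut-off of 20 dequeues is an arbitrary choice, and q.close(cancel_pending_enqueues=True) unblocks enqueue calls stuck on a full queue) is to replace the two coord.join calls with something like:

  for _ in range(20):                              # arbitrary cut-off for the demo
    print("main dequeue", sess.run(dequeue_op))
  coord.request_stop()                             # ask every worker thread to leave its loop
  sess.run(q.close(cancel_pending_enqueues=True))  # unblock enqueues stuck on a full queue
  try:
    coord.join(enqueue_threads + dequeue_threads)
  except (tf.errors.CancelledError, tf.errors.OutOfRangeError):
    pass                                           # expected when the queue is closed under blocked threads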
 







arm v8 ttbr

Armv8 has three virtual memory spaces.
One is shared by EL0 and EL1. To keep the two separated, TTBR0_EL1 and TTBR1_EL1 hold different translation table base addresses: TTBR0 covers the lower (EL0/user) part of the address space and TTBR1 the upper (EL1/kernel) part, with the ranges configured by the TCR_EL1 register.
EL2 and EL3 each have an independent address space whose translation table base address lives in TTBR0_EL2 and TTBR0_EL3 respectively. Those two regimes have no TTBR1: TTBR1 exists to separate the OS kernel region from the user region, and EL2/EL3 are not where the OS lives.
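
As a rough Python illustration of that split (purely illustrative; the 48-bit VA size below stands in for whatever TCR_EL1.T0SZ/T1SZ actually configure):

VA_BITS = 48  # assumed VA size per half; the real value comes from TCR_EL1.T0SZ/T1SZ

def select_ttbr(va, ttbr0_el1, ttbr1_el1):
  # Upper bits all 0 -> lower (user) range, walked from TTBR0_EL1;
  # upper bits all 1 -> upper (kernel) range, walked from TTBR1_EL1.
  upper = va >> VA_BITS
  if upper == 0:
    return ttbr0_el1
  if upper == (1 << (64 - VA_BITS)) - 1:
    return ttbr1_el1
  raise ValueError("VA in the hole between the two ranges -> translation fault")

print(hex(select_ttbr(0x00007ffffffff000, 0x1000, 0x2000)))  # user VA   -> 0x1000 (TTBR0_EL1)
print(hex(select_ttbr(0xffff000000001000, 0x1000, 0x2000)))  # kernel VA -> 0x2000 (TTBR1_EL1)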


Ch.3.2

write buffer, described in detail

The write buffer is a small memory that sits between the L1 D-cache and the lower-level memory (L2 or DRAM) to hide write latency.
How the write buffer behaves depends on the cache write policy. With write-through, the write address and data are delivered to the write buffer and the L1 D-cache at the same time, and the lower-level memory is updated later, during bus idle time.
With write-back, the write buffer is called a victim buffer. The key property of a victim buffer is that the CPU can still access the data held in it; if that condition is not met, coherence between the victim buffer and the lower memory system can break.
Victim cache (Intel): https://en.m.wikipedia.org/wiki/Victim_cache
Check the victim cache policy.
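
A toy Python sketch of that "the CPU can still hit in the victim buffer" property (the 4-entry size and FIFO eviction are arbitrary assumptions, and write_back_to_lower_level is a hypothetical stand-in for the L2/DRAM path):

from collections import OrderedDict

def write_back_to_lower_level(addr, data):
  # Hypothetical stand-in for the L2/DRAM write path.
  print("write back", hex(addr), data)

class VictimBuffer:
  """Toy model: holds lines evicted from the L1 D-cache and still serves CPU lookups."""
  def __init__(self, capacity=4):               # arbitrary size
    self.capacity = capacity
    self.entries = OrderedDict()                # addr -> data, oldest first

  def insert(self, addr, data):                 # called on an L1 eviction
    if len(self.entries) >= self.capacity:
      old_addr, old_data = self.entries.popitem(last=False)
      write_back_to_lower_level(old_addr, old_data)
    self.entries[addr] = data

  def lookup(self, addr):
    # The CPU checks here after an L1 miss; a hit avoids going down to L2/DRAM.
    return self.entries.get(addr)

vb = VictimBuffer()
vb.insert(0x1000, "dirty line")
print(vb.lookup(0x1000))                        # hit: "dirty line"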

SEDEX_KES_2018 exhibition

I visited the SEDEX & KES 2018 exhibitions.
Because the two were held together, I could attend both in one day.

I focused on businesses and research related to vision intelligence.
I found several companies using deep learning for object detection.
In particular, some research groups at ETRI and a company called DEEPR I have been developing accelerators for deep learning.
Because this was quite similar to what I am doing at my company now, I asked several questions and took pictures of the demos.

The objective of this project is to make the neural network model adaptive.
A typical neural network platform does not update the model once the training phase has ended.
ETRI's platform, however, updates the model based on the user's input.
Specifically, the platform runs an application that recognizes hand-written Korean letters.
Based on the writer's style, the model is updated quickly.
The point is that the model update has to be very fast, while the computing power of edge devices is very low.
To overcome this, the research group made their platform use not only the CPU but also the Mali GPU that is embedded in most ARM-based edge devices.
Anyway, the demo was quite cool: the update process was very fast, and the model ended up fully tuned to recognize my writing style.

docker usage summary, described in detail

docker run --net=host -ti -v /home:/home tensorflow/tensorflow:nightly-devel bash

docker run -it -w /tensorflow -v $PWD:/mnt -e HOST_PERMS="$(id -u):$(id -g)" tensorflow/tensorflow:nightly-devel bash

-it bash : The -it instructs Docker to allocate a pseudo-TTY connected to the container’s stdin; creating an interactive bash shell in the container.

-w /tensorflow : set the working directory inside the container
-v : mount a host path as a volume (here the host's $PWD shows up as /mnt in the container)


HOST $ sudo docker run --security-opt="apparmor=unconfined" --cap-add=SYS_PTRACE --net=host -ti -v /home:/home --name (new container name) nvdla/vp

--net : use the host's network
--name : name for the new container

To save the container as a new image after modifying it ...

Then get the container id using this command:

sudo docker ps -l

Commit the container's changes to a new image:

sudo docker commit <container_id> iman/ping 
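
The committed image can then be started like any other image (reusing the iman/ping name from the command above):

sudo docker run -ti iman/ping bash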
