Search
1,000 questions found on Aha
- ENT · Medical consultation
  Q. Am I taking too many supplements, and could there be side effects? Hello. I started taking various supplements about two years ago because of facial skin problems, and I am still taking them regularly out of habit. People occasionally tell me that I seem to be taking too much and should cut back, so I would like to ask for advice. I take one capsule or tablet of each, either daily or every other day. The amounts per capsule or tablet of what I currently take are: vitamin C 1,000 mg / vitamin D 400 IU / vitamin E 400 IU / pantothenic acid 500 mg / zinc 50 mg / L-cysteine 500 mg / selenium 200 mcg / glucosamine 750 mg / a probiotic capsule with 1 billion lactic acid bacteria. That is what I take in a day. Is it too much for a daily intake? Can the capsule and tablet fillers themselves put a strain on the body? I would appreciate an answer!
- Life tips · Lifestyle
  Q. How should I build this Excel formula? I want to compare A with B and C with D. If A is greater than B, output the text X and calculate A+B. If C is less than D, output the text Y and calculate C-B. If A is greater than B and C is less than D at the same time (both conditions satisfied), output the text Z and output the value E. If the two conditions are not both satisfied, output the number 0. I am trying to write this with the IF() function, but since a single IF only handles two outcomes, I cannot work out a formula that compares several conditions like this and outputs the right value. I would appreciate help from the experts.
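  A minimal sketch for the Excel question above, assuming the values sit in cells A1, B1, C1, D1 and E1. Nested IF calls combined with AND cover the multi-condition case; since one cell can return only one result, the text label and the calculated number are split across two cells:

  Label cell: =IF(AND(A1>B1, C1<D1), "Z", IF(A1>B1, "X", IF(C1<D1, "Y", 0)))
  Value cell: =IF(AND(A1>B1, C1<D1), E1, IF(A1>B1, A1+B1, IF(C1<D1, C1-B1, 0)))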
- Other medical consultations · Medical consultation
  Q. Can taking vitamins prevent COVID-19? I have been taking a multivitamin steadily for about two years to strengthen my immune system. It contains vitamins A, C, D, E and so on, and I take two tablets a day, one in the morning and one in the evening. Can a multivitamin prevent COVID-19?
- Life tips · Lifestyle
  Q. Android Studio: code for implementing a board (list) feature.

TextView tv;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    tv = findViewById(R.id.ant);
}

public void clickBtn(View view) {
    try {
        AssetManager assetManager = getAssets();
        InputStream is = assetManager.open("jsons/an3172.json");
        InputStreamReader isr = new InputStreamReader(is);
        BufferedReader reader = new BufferedReader(isr);
        StringBuffer buffer = new StringBuffer();
        String line = reader.readLine();
        while (line != null) {
            buffer.append(line + "\n");
            line = reader.readLine();
        }
        String jsonData = buffer.toString();
        JSONArray jsonArray = new JSONArray(jsonData);
        String s = "";
        for (int i = 0; i < jsonArray.length(); i++) {
            JSONObject jo = jsonArray.getJSONObject(i);
            String name = jo.getString("name");
            String msg = jo.getString("msg");
            JSONObject flag = jo.getJSONObject("flag");
            int a = flag.getInt("a");
            int b = flag.getInt("b");
            s += name + " : " + msg + "==>" + a + "," + b + "\n";
        }
        tv.setText(s);
    } catch (IOException e) {
        e.printStackTrace();
    } catch (JSONException e) {
        e.printStackTrace();
    }
}

  This is the JSON example implemented in my Java class. How should I handle the line s += name+" : "+msg+"==>"+a+","+b+"\n"; so that the result comes out as an array of two lines? This is the JSON:

[{"name": "sam", "msg": "Hello world", "flag": {"a": 10, "b": 20}},
 {"name": "robin", "msg": "Nice to meet you", "flag": {"a": 100, "b": 200}}]

  To read the JSON above and show it as two lines rather than one, how do I need to change the line s += name+" : "+msg+"==>"+a+","+b+"\n"; ? Source of the example: https://lcw126.tistory.com/m/101
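  A minimal sketch, in the same Java context as above, of one way to collect the parsed entries as a two-element array of lines instead of one concatenated string, then join them for the TextView (names follow the question; this is an illustration, not the tutorial's code):

String[] lines = new String[jsonArray.length()];
for (int i = 0; i < jsonArray.length(); i++) {
    JSONObject jo = jsonArray.getJSONObject(i);
    JSONObject flag = jo.getJSONObject("flag");
    // one formatted line per JSON array element
    lines[i] = jo.getString("name") + " : " + jo.getString("msg")
            + "==>" + flag.getInt("a") + "," + flag.getInt("b");
}
// join the array with newlines so each entry appears on its own line
tv.setText(TextUtils.join("\n", lines));

  (TextUtils is android.text.TextUtils; String.join("\n", lines) also works on API 26 and above.)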
- Defamation & insult · Law
  Q. Does this meet the requirements for defamation? In a marketplace post:
  a. Under the seller's earlier posts there are occasional comments from buyers saying they agreed to buy an item, sent the money, and then did not receive it for two to three weeks.
  b. Searching the seller's ID on Google also turns this up (the posts have since been deleted, but a few lines such as "xxxx suspected scam ~~" still appear in the search results).
  c. Tying up buyers' money like this is typical of a Ponzi-style scam (continuously covering earlier buyers with new buyers' money).
  d. I commented on the seller's post: "Before buying, be sure to search this person's ID (nickname) on Naver ~ suspected Ponzi scheme."
  e. I was then contacted and told I had been reported for criminal defamation.
  I assumed identifiability would be hard to establish because my comment contained only the ID and nickname (both visible right there in the post), but if you search that person's ID on Naver, a now-deleted post about Ponzi-scheme damage shows their name, phone number, and bank account number.
  1. Does identifiability hold in this case? I only mentioned the ID and nickname.
  2. If identifiability holds, would the phrase "suspected Ponzi scheme" amount to defamation?
  3. I wrote "suspected Ponzi scheme" based on a. (other buyers having their money tied up for weeks before being refunded, on several occasions). Could I argue this was a statement of fact in the public interest? The seller never even replied to the comments about a.
  4. Even if identifiability and publicity are both established, could a minor case like this realistically go beyond a suspended indictment?
  5. If I really have been reported, when would I find out about it?
  I would like to know how best to respond.
- Dermatology · Medical consultation
  Q. How do I handle comprehensive income tax / year-end settlement as an individual Kmong freelancer? Hello. I am a freelancer doing design work on Kmong. From last June until now I have sold about 13 million KRW worth of work. I have no registered business and work as an individual expert, and I have some tax questions. The National Tax Service texted me that I am subject to bookkeeping for comprehensive income tax (simplified bookkeeping). Do I need to prepare that in advance? I understand freelancers fall under filing type E for comprehensive income tax; is that correct? I also understand that 3.3% is withheld for freelancers. However, Kmong charges a commission, so while sales were 13 million KRW, my actual income was about 10 million KRW. In that case, do I file based on the sales amount? When keeping the books, under what item should the Kmong commission be recorded? I also worked a part-time job from January to April last year and freelanced after that; how should I report that? And for a freelancer, from exactly what level of income is business registration required? (I read "register a business the following year once annual sales exceed 24 million KRW", but the threshold seems to differ from source to source, so I am asking!) I always filed comprehensive income tax properly while doing short-term part-time work, but freelancing has raised a lot of difficulties, hence all these questions..! I will be waiting for your answers. Thank you very much.
- Life tips · Lifestyle
  Q. Error in Python code when run in IDLE? Hello, while studying coding I used the code from http://solarisailab.com/archives/486. The code is:

"""
TensorFlow translation of the torch example found here (written by SeanNaren).
https://github.com/SeanNaren/TorchQLearningExample
Original keras example found here (written by Eder Santana).
https://gist.github.com/EderSantana/c7222daa328f0e885093#file-qlearn-py-L164
The agent plays a game of catch. Fruits drop from the sky and the agent can choose the actions
left/stay/right to catch the fruit before it reaches the ground.
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import random
import math
import os

# Parameters
epsilon = 1  # The probability of choosing a random action (in training). This decays as iterations increase. (0 to 1)
epsilonMinimumValue = 0.001  # The minimum value we want epsilon to reach in training. (0 to 1)
nbActions = 3  # The number of actions. Since we only have left/stay/right that means 3 actions.
epoch = 1001  # The number of games we want the system to run for.
hiddenSize = 100  # Number of neurons in the hidden layers.
maxMemory = 500  # How large should the memory be (where it stores its past experiences).
batchSize = 50  # The mini-batch size for training. Samples are randomly taken from memory till mini-batch size.
gridSize = 10  # The size of the grid that the agent is going to play the game on.
nbStates = gridSize * gridSize  # We eventually flatten to a 1d tensor to feed the network.
discount = 0.9  # The discount is used to force the network to choose states that lead to the reward quicker (0 to 1)
learningRate = 0.2  # Learning Rate for Stochastic Gradient Descent (our optimizer).

# Create the base model.
X = tf.placeholder(tf.float32, [None, nbStates])
W1 = tf.Variable(tf.truncated_normal([nbStates, hiddenSize], stddev=1.0 / math.sqrt(float(nbStates))))
b1 = tf.Variable(tf.truncated_normal([hiddenSize], stddev=0.01))
input_layer = tf.nn.relu(tf.matmul(X, W1) + b1)
W2 = tf.Variable(tf.truncated_normal([hiddenSize, hiddenSize], stddev=1.0 / math.sqrt(float(hiddenSize))))
b2 = tf.Variable(tf.truncated_normal([hiddenSize], stddev=0.01))
hidden_layer = tf.nn.relu(tf.matmul(input_layer, W2) + b2)
W3 = tf.Variable(tf.truncated_normal([hiddenSize, nbActions], stddev=1.0 / math.sqrt(float(hiddenSize))))
b3 = tf.Variable(tf.truncated_normal([nbActions], stddev=0.01))
output_layer = tf.matmul(hidden_layer, W3) + b3

# True labels
Y = tf.placeholder(tf.float32, [None, nbActions])

# Mean squared error cost function
cost = tf.reduce_sum(tf.square(Y - output_layer)) / (2 * batchSize)

# Stochastic Gradient Decent Optimizer
optimizer = tf.train.GradientDescentOptimizer(learningRate).minimize(cost)

# Helper function: Chooses a random value between the two boundaries.
def randf(s, e):
    return (float(random.randrange(0, (e - s) * 9999)) / 10000) + s

# The environment: Handles interactions and contains the state of the environment
class CatchEnvironment():
    def __init__(self, gridSize):
        self.gridSize = gridSize
        self.nbStates = self.gridSize * self.gridSize
        self.state = np.empty(3, dtype=np.uint8)

    # Returns the state of the environment.
    def observe(self):
        canvas = self.drawState()
        canvas = np.reshape(canvas, (-1, self.nbStates))
        return canvas

    def drawState(self):
        canvas = np.zeros((self.gridSize, self.gridSize))
        canvas[self.state[0]-1, self.state[1]-1] = 1  # Draw the fruit.
        # Draw the basket. The basket takes the adjacent two places to the position of basket.
        canvas[self.gridSize-1, self.state[2]-1-1] = 1
        canvas[self.gridSize-1, self.state[2]-1] = 1
        canvas[self.gridSize-1, self.state[2]-1+1] = 1
        return canvas

    # Resets the environment. Randomly initialise the fruit position (always at the top to begin with) and bucket.
    def reset(self):
        initialFruitColumn = random.randrange(1, self.gridSize + 1)
        initialBucketPosition = random.randrange(2, self.gridSize + 1 - 1)
        self.state = np.array([1, initialFruitColumn, initialBucketPosition])
        return self.getState()

    def getState(self):
        stateInfo = self.state
        fruit_row = stateInfo[0]
        fruit_col = stateInfo[1]
        basket = stateInfo[2]
        return fruit_row, fruit_col, basket

    # Returns the award that the agent has gained for being in the current environment state.
    def getReward(self):
        fruitRow, fruitColumn, basket = self.getState()
        if (fruitRow == self.gridSize - 1):  # If the fruit has reached the bottom.
            if (abs(fruitColumn - basket) <= 1):  # Check if the basket caught the fruit.
                return 1
            else:
                return -1
        else:
            return 0

    def isGameOver(self):
        if (self.state[0] == self.gridSize - 1):
            return True
        else:
            return False

    def updateState(self, action):
        if (action == 1):
            action = -1
        elif (action == 2):
            action = 0
        else:
            action = 1
        fruitRow, fruitColumn, basket = self.getState()
        newBasket = min(max(2, basket + action), self.gridSize - 1)  # The min/max prevents the basket from moving out of the grid.
        fruitRow = fruitRow + 1  # The fruit is falling by 1 every action.
        self.state = np.array([fruitRow, fruitColumn, newBasket])

    # Action can be 1 (move left) or 2 (move right)
    def act(self, action):
        self.updateState(action)
        reward = self.getReward()
        gameOver = self.isGameOver()
        return self.observe(), reward, gameOver, self.getState()  # For purpose of the visual, I also return the state.

# The memory: Handles the internal memory that we add experiences that occur based on agent's actions,
# and creates batches of experiences based on the mini-batch size for training.
class ReplayMemory:
    def __init__(self, gridSize, maxMemory, discount):
        self.maxMemory = maxMemory
        self.gridSize = gridSize
        self.nbStates = self.gridSize * self.gridSize
        self.discount = discount
        canvas = np.zeros((self.gridSize, self.gridSize))
        canvas = np.reshape(canvas, (-1, self.nbStates))
        self.inputState = np.empty((self.maxMemory, 100), dtype=np.float32)
        self.actions = np.zeros(self.maxMemory, dtype=np.uint8)
        self.nextState = np.empty((self.maxMemory, 100), dtype=np.float32)
        self.gameOver = np.empty(self.maxMemory, dtype=np.bool)
        self.rewards = np.empty(self.maxMemory, dtype=np.int8)
        self.count = 0
        self.current = 0

    # Appends the experience to the memory.
    def remember(self, currentState, action, reward, nextState, gameOver):
        self.actions[self.current] = action
        self.rewards[self.current] = reward
        self.inputState[self.current, ...] = currentState
        self.nextState[self.current, ...] = nextState
        self.gameOver[self.current] = gameOver
        self.count = max(self.count, self.current + 1)
        self.current = (self.current + 1) % self.maxMemory

    def getBatch(self, model, batchSize, nbActions, nbStates, sess, X):
        # We check to see if we have enough memory inputs to make an entire batch, if not we create the biggest
        # batch we can (at the beginning of training we will not have enough experience to fill a batch).
        memoryLength = self.count
        chosenBatchSize = min(batchSize, memoryLength)
        inputs = np.zeros((chosenBatchSize, nbStates))
        targets = np.zeros((chosenBatchSize, nbActions))
        # Fill the inputs and targets up.
        for i in xrange(chosenBatchSize):
            if memoryLength == 1:
                memoryLength = 2
            # Choose a random memory experience to add to the batch.
            randomIndex = random.randrange(1, memoryLength)
            current_inputState = np.reshape(self.inputState[randomIndex], (1, 100))
            target = sess.run(model, feed_dict={X: current_inputState})
            current_nextState = np.reshape(self.nextState[randomIndex], (1, 100))
            current_outputs = sess.run(model, feed_dict={X: current_nextState})
            # Gives us Q_sa, the max q for the next state.
            nextStateMaxQ = np.amax(current_outputs)
            if (self.gameOver[randomIndex] == True):
                target[0, [self.actions[randomIndex]-1]] = self.rewards[randomIndex]
            else:
                # reward + discount(gamma) * max_a' Q(s',a')
                # We are setting the Q-value for the action to r + gamma*max a' Q(s', a'). The rest stay the same
                # to give an error of 0 for those outputs.
                target[0, [self.actions[randomIndex]-1]] = self.rewards[randomIndex] + self.discount * nextStateMaxQ
            # Update the inputs and targets.
            inputs[i] = current_inputState
            targets[i] = target
        return inputs, targets

def main(_):
    print("Training new model")
    # Define Environment
    env = CatchEnvironment(gridSize)
    # Define Replay Memory
    memory = ReplayMemory(gridSize, maxMemory, discount)
    # Add ops to save and restore all the variables.
    saver = tf.train.Saver()
    winCount = 0
    with tf.Session() as sess:
        tf.initialize_all_variables().run()
        for i in xrange(epoch):
            # Initialize the environment.
            err = 0
            env.reset()
            isGameOver = False
            # The initial state of the environment.
            currentState = env.observe()
            while (isGameOver != True):
                action = -9999  # action initilization
                # Decides if we should choose a random action, or an action from the policy network.
                global epsilon
                if (randf(0, 1) <= epsilon):
                    action = random.randrange(1, nbActions+1)
                else:
                    # Forward the current state through the network.
                    q = sess.run(output_layer, feed_dict={X: currentState})
                    # Find the max index (the chosen action).
                    index = q.argmax()
                    action = index + 1
                # Decay the epsilon by multiplying by 0.999, not allowing it to go below a certain threshold.
                if (epsilon > epsilonMinimumValue):
                    epsilon = epsilon * 0.999
                nextState, reward, gameOver, stateInfo = env.act(action)
                if (reward == 1):
                    winCount = winCount + 1
                memory.remember(currentState, action, reward, nextState, gameOver)
                # Update the current state and if the game is over.
                currentState = nextState
                isGameOver = gameOver
                # We get a batch of training data to train the model.
                inputs, targets = memory.getBatch(output_layer, batchSize, nbActions, nbStates, sess, X)
                # Train the network which returns the error.
                _, loss = sess.run([optimizer, cost], feed_dict={X: inputs, Y: targets})
                err = err + loss
            print("Epoch " + str(i) + ": err = " + str(err) + ": Win count = " + str(winCount) +
                  " Win ratio = " + str(float(winCount)/float(i+1)*100))
        # Save the variables to disk.
        save_path = saver.save(sess, os.getcwd()+"/model.ckpt")
        print("Model saved in file: %s" % save_path)

if __name__ == '__main__':
    tf.app.run()

  But I got this error:

WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\compat\v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Training new model
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\util\tf_should_use.py:198: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use tf.global_variables_initializer instead.
W0820 22:17:13.656675 9068 deprecation.py:323] From C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\util\tf_should_use.py:198: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use tf.global_variables_initializer instead.
Traceback (most recent call last):
  File "C:\Windows\system32\python", line 267, in <module>
    tf.app.run()
  File "C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "C:\Windows\system32\python", line 216, in main
    for i in xrange(epoch):
NameError: name 'xrange' is not defined

  How should I fix this? It is very long, but I would really appreciate a solution ㅠㅠ
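  A minimal sketch of the likely fix for the NameError above, assuming Python 3: xrange only exists in Python 2, so either replace every xrange(...) call with range(...), or add a small compatibility alias near the imports so the rest of the script runs unchanged:

# Python 3 removed xrange; range now behaves the same lazy way.
try:
    xrange  # defined on Python 2
except NameError:
    xrange = range  # alias on Python 3 so loops like `for i in xrange(epoch):` keep working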
- Restructuring · Employment & Labor
  Q. In an urgent situation where suspicious signs such as an attempted leak of critical technology are detected, may a company directly examine an employee's personal PC logs, e-mail, and mobile phone call records without the employee's consent? According to a news report a few days ago, a KAIST professor was found to have attempted to leak secondary-battery technology, a core component of electric vehicles, to a Chinese company. Likewise, I would like to know whether, in an urgent situation where suspicious signs such as an attempted leak of a company's critical technology are detected, the company is permitted to directly examine that employee's personal PC log records, e-mail, and mobile phone call records without the employee's consent.
- Life tips · Lifestyle
  Q. In PHP, can I pull out only the entries of an array that match certain keys? For example, I have $all_array = array('a'=>'1', 'b'=>'2', 'c'=>'3', 'd'=>'4', 'e'=>'5'); and I also have an array $find_array = array('b', 'd', 'e');. What I want is to get the values from $all_array that correspond to $find_array, so that $result_array comes out either as array('b'=>'2', 'd'=>'4', 'e'=>'5'); or as array('2', '4', '5');. Is there a way to do this?
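  A minimal sketch of one way to do this with PHP's built-in array functions, assuming the two arrays exactly as in the question:

<?php
$all_array  = array('a' => '1', 'b' => '2', 'c' => '3', 'd' => '4', 'e' => '5');
$find_array = array('b', 'd', 'e');

// Keep only the entries of $all_array whose keys appear in $find_array.
$result_keyed = array_intersect_key($all_array, array_flip($find_array));
// array('b' => '2', 'd' => '4', 'e' => '5')

// Drop the keys if only the values are needed.
$result_values = array_values($result_keyed);
// array('2', '4', '5')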
- Life tips · Lifestyle
  Q. A follow-up question for 빌리보이: Lambda fileUpload. I will post the whole source briefly. The request comes in through serverless to the handler; this is the part of handler.ts I use:

export async function getFunctionExcel(event: lambda.APIGatewayProxyEvent, context: lambda.Context) {
    context.callbackWaitsForEmptyEventLoop = false;
    connection = await db.getConnection();
    const result = await app.getFunctionExcel(connection, event, (event.queryStringParameters as any), context);
    return result.getAPIGatewayProxyResult();
}

  This is the app.ts part:

export async function getFunctionExcel(connection: Connection, event: any, params: Parameter, context: lambda.Context) {
    const total = [...] // the data from the DB that goes into the Excel file
    const resultFileName = common.createExcel('excel_', total); // filename, data
    return DefaultResponse.getSuccess({
        message: 'The list was retrieved successfully.',
        data: {
            resultFileName,
        }
    })
}

  This is the common.ts part:

export function createExcel(fileName: string, totalList: any) {
    let key = '';
    try {
        // step 1. create the workbook
        let wb = XLSX.utils.book_new();
        // step 2. build the sheet [key: column name, val: value]; to use Korean column names the key itself must be Korean
        let newWorksheet = XLSX.utils.json_to_sheet(totalList);
        // step 3. attach the new worksheet to the workbook under a sheet name
        XLSX.utils.book_append_sheet(wb, newWorksheet, 'Sheet0');
        // step 4. build the Excel file
        const file = XLSX.write(wb, {type: "buffer"});
        // step 5. upload to S3 and pass the key to the front end
        fileName += moment().format('YYYYMMDD_HHmmss') + '.xlsx';
        key = 'excel' + `/` + fileName;
        fileService.uploadFileStream(file, key);
    } catch (e) {
        console.error(e.message);
    }
    return key;
}

  This is the fileService.ts part:

export async function uploadFileStream(fileStream: any, key: string, bucketName: string = '') {
    bucketName = bucketName || 'myStorage';
    const uploadParams = {Bucket: bucketName, Key: key, Body: fileStream};
    console.log('5. uploading the Excel file');
    await s3.putObject(uploadParams, (err, data) => {
        console.log('7. s3.putObject callback function');
        if (data) {
            console.log("Upload Success", data);
        } else if (err) {
            console.log("Error", err);
        }
    });
    console.log('6. Excel file upload tag check');
}

  https://www.a-ha.io/questions/423161060adbe5b38b999c975b4f867b With the help of this earlier answer I did get the file upload working (the problem is that it only works locally ㅠㅠ).
  https://www.a-ha.io/questions/4086d320f3d129488b9ee562df4b04d8?recBy=KP7T7V Of what you showed me yesterday, example 2) "using async await" fails even locally once the callback part is removed. Internally our company is moving away from using callback() and Promise(), so I need to solve this with the inner callback, but the inner callback... never fires. If '7. s3.putObject callback function' were printed I could at least tell that it got in... Thank you so much for your help. haha
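  A minimal sketch of one possible cause and fix, assuming the aws-sdk v2 S3 client shown above: s3.putObject(params, callback) returns an AWS.Request rather than a Promise, so the await completes immediately and the Lambda can return before the callback ever runs (which would explain why '7. s3.putObject callback function' never prints outside local runs). Chaining .promise() makes the upload itself awaitable without writing an explicit callback or new Promise(), as long as every caller up the chain awaits it:

// fileService.ts (sketch): let the SDK create the promise instead of passing a callback
export async function uploadFileStream(fileStream: any, key: string, bucketName: string = '') {
    bucketName = bucketName || 'myStorage';
    const uploadParams = { Bucket: bucketName, Key: key, Body: fileStream };
    // resolves only once S3 has stored the object, so the handler cannot return too early
    const data = await s3.putObject(uploadParams).promise();
    console.log('Upload Success', data);
    return key;
}

// common.ts (sketch): createExcel would then need to be async and await the upload,
// and app.ts would need `await common.createExcel(...)` as well.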